On the one hand, I think OCR, text-to-speech, image-to-text, etc. have improved quite a lot.
On the other hand, more and more stuff is locked away in apps and JavaScript-blob websites, so I can imagine it's harder for accessibility tools to get at the information.
But I’m just guessing. Do any of you know first or second hand?
You can literally just give an AI access to your camera now and it can describe the world around you: complex scenes, facial expressions, handwritten notes, etc. Auto-generated subtitles have gotten a lot better, and speech can be converted to ASL in real time or shown as subtitles on a set of smart glasses. So much stuff that was locked away is now unlocked.
From my perspective making software, it's improved. Prior to 2016, I had to be the one pushing WCAG standards. Since 2016, accessibility has been an explicit, customer-facing acceptance criterion managed by product managers.