AI’s next great challenge is understanding the nuances of language. That’s hard because language is more than just words; it’s also meaning, intent, and context.
HBR lists seven skills that can’t be automated. They involve communication, people management, teaching, and ethics. I wouldn’t be quite so certain or smug about the prognostication, though. Not that AI will evolve to become more human-like, but rather that humans may devolve to become more AI-like.
Google’s Talk to Books uses AI to search through over 150,000 books to answer your questions. I asked about the answer to life and got existentialist answers.
“In a word, each man is questioned by life; and he can only answer to life by answering for his own life; to life he can only respond by being responsible.”
That, and 42. Was not disappointed.
China wants to shape the global future of artificial intelligence, and one way it plans to do that is by setting the technical standards.
CIMON, or Crew Interactive Mobile Companion, is a floating head AI assistant powered by IBM Watson.
…and Big Brother is the machines. Alibaba is planning to wire the entire world with AI.
AI disasters likened to natural disasters. But how do you reasonably assess the risk? (With AI, probably. Snigger)
IBM Research titles this article Mitigating Bias in AI Models, a generic and laudable goal, but the real story behind it is more interesting. Apparently, women, and particularly women of color, were underrepresented in the datasets used to train facial recognition models, leading to more errors. So, ultimately, it’s about bias in the training data.
From MIT Technology Review: algorithms are making American inequality worse. My takeaway from this piece is that the problem isn’t the algorithms themselves but the thought processes behind them. Algorithms simply optimize for the goals you set. Given unfair and unjust starting premises, they just make the injustice more efficient.
From IBM Research: Automated analysis of free speech predicts psychosis onset in high-risk youths. This technique uses AI to analyze the speech patterns of clinical high-risk youths, flagging markers such as “poverty of speech” and “flight of ideas.”
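The research measured, among other things, how semantically connected consecutive sentences are, since abrupt topic jumps resemble “flight of ideas.” As a toy illustration only (the actual study used far richer semantic embeddings; the function names and sample sentences here are my own invention), one can approximate sentence-to-sentence coherence with simple bag-of-words cosine similarity:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def coherence(sentences: list[str]) -> float:
    # Average similarity between consecutive sentences:
    # low scores crudely approximate "flight of ideas."
    vecs = [Counter(s.lower().split()) for s in sentences]
    sims = [cosine(vecs[i], vecs[i + 1]) for i in range(len(vecs) - 1)]
    return sum(sims) / len(sims) if sims else 0.0

focused = ["the dog chased the ball", "the ball rolled to the dog"]
scattered = ["the dog chased the ball", "trains are loud at night"]
print(coherence(focused) > coherence(scattered))  # True
```

A real system would use dense semantic embeddings rather than raw word overlap, so that related but non-identical words still register as coherent.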