Is Microsoft’s ‘Humanist Superintelligence’ vision more than an empty slogan?
Tuesday, November 25, 2025, 07:00 AM, from ComputerWorld
There are few things tech companies like more than rolling out marketing-tested slogans that sound like cutting-edge breakthroughs but turn out to be nothing more than old wine in new bottles. It's a lot easier to roll out a slogan than to do the hard work of creating something new.
So, it's difficult not to be cynical about Microsoft's announcement this month that it's forging a new path in AI, what it calls "Humanist Superintelligence (HSI)." It bears all the earmarks of sloganeering, coupling the AI buzzword "superintelligence" with the society-centered word "humanist."

The HSI vision was laid out in a blog post by Mustafa Suleyman, Microsoft AI CEO and executive vice president. He was a co-founder and former head of applied AI at DeepMind, which was bought by Google for a reported sum between $400 million and $650 million. He then co-founded Inflection AI before making the move to Microsoft. Suleyman is clearly a technologist more than he is a sloganeer.

Still, the question remains: Is "Humanist Superintelligence" just hype, or is there something groundbreaking in what he's proposing? To answer that, let's delve into the plans he laid out in his announcement.

Putting humanity first — and pushing back against AGI

To understand Suleyman's post, you need to understand the current Holy Grail of most AI researchers and tech executives: AGI, an acronym for artificial general intelligence. AGI is the ability of a machine to reason like a human being, on a kind of superhuman scale. A machine that achieved AGI would be able to work on just about any task, adapt to new situations without additional training, and learn and take actions autonomously, without human intervention.

AGI's backers promise many benefits they believe the technology would bring to humankind, although many also acknowledge that without the proper guardrails, AGI could become an existential danger to humankind.

Suleyman's vision directly pushes back against AGI. He writes that HSI will "solve real concrete problems and do it in such a way that it remains grounded and controllable. We are not building an ill-defined and ethereal superintelligence; we are building a practical technology explicitly designed only to serve humanity. In doing this, we reject narratives about a race to AGI."

HSI is not one-size-fits-all like AGI. Instead, it is a series of AI-based technologies, each pointed at solving an important problem and aimed at bettering people's lives.

In his description, Suleyman takes a swipe at tech execs and researchers who care more about developing new technologies than about how those technologies harm or help people. He writes: "I think we technologists need to do a better job of imagining a future that most people in the world actually want to live in…. Instead of being designed to beat all humans at all tasks and dominate everything, HSI begins rooted in specific societal challenges that improve human well-being."

All this can sound high-minded and vague, so he provides details on where Microsoft will focus its first HSI work. The company has already begun work on what he calls Medical Superintelligence. Next, he says, will be work on designing plentiful, clean, inexpensive energy.

Suleyman claims HSI will be safe from the get-go, in contrast to AGI's potential dangers. He calls HSI "a subordinate, controllable AI, one that won't, that can't open a Pandora's Box. Contained, value aligned, safe — these are basics, but not enough. HSI keeps humanity in the driving seat, always."

Is HSI for real or just more hype?

All that sounds impressive. But words are cheap. Is HSI just an elevated example of tech hype? A look at Suleyman's past offers some clues. He's no Johnny-come-lately to warning about AI's potential dangers.
His book, The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma, warns about the evil AI can do unless it is reined in, including building autonomous weapons and bioengineering pathogens. He argues that global regulations are required to stop those and other dangers. In addition, at DeepMind he established an Ethics and Society unit to scrutinize the potentially harmful aspects of AI and take steps to ameliorate them.

Keep in mind, HSI doesn't conflict with making big profits. In fact, the opposite is true. So far, general-purpose generative AI (genAI), a forerunner to AGI, hasn't paid off so well, and there's some evidence it may never pay off. A McKinsey report warns: "Nearly eight in 10 companies report using gen AI — yet just as many report no significant bottom-line impact." An MIT report, The GenAI Divide: State of AI in Business 2025, found that 95% of genAI pilots in businesses are failing.

Many people believe the big money in AI isn't in genAI, but in special-purpose uses such as those Suleyman suggests. Gary Marcus, a founder of two AI companies, argues in a New York Times opinion piece: "If the strengths of AI are truly to be harnessed, the tech industry should stop focusing so heavily on these one-size-fits-all tools and instead concentrate on narrow, specialized AI tools engineered for particular problems."

Suleyman's vision hasn't yet bumped up against the bottom-line question: how much profit can Microsoft wring from it? So it's too early to tell whether HSI will prove to be anything more than a grand vision unfulfilled. However, his goals are worthy ones. I'm hoping the world lets him accomplish them.
https://www.computerworld.com/article/4093970/is-microsofts-humanist-superintelligence-vision-more-t...