The launch of ChatGPT created a tremendous buzz over the potential of what is known as generative artificial intelligence (AI). It has raised concerns that progress in this field could replace human work in many areas of endeavour. This is despite the fact that much of the content ChatGPT generates tends to be mundane and sometimes fictional: essays churned out by it have been known to manufacture false data to suit their conclusions. Even so, the system has been tapped in a big way by millions around the world, especially students seeking to create content without making any academic effort.
Other Big Tech firms like Google have also entered the fray with their own versions of generative AI, such as Bard. At the same time, questions are being raised over the ethical issues surrounding the progress of generative AI, with some, such as Elon Musk, declaring that its rapid evolution poses a danger to humanity.
It is against this backdrop that one must view the recent uproar over the sacking of Sam Altman as CEO of OpenAI, the company that launched ChatGPT. Now that the dust has settled, and Altman is not only back but well and truly in charge with a newly anointed board of his choice, one has to wonder whether the race among Big Tech firms to develop AI needs to pause: to take a step back, as it were, and look more carefully at the potential of a scientific development that may well change the course of human history.
One must first recap the Altman episode. The non-profit board of OpenAI suddenly decided to remove the CEO, citing a lack of adequate communication. Initially, this seemed a puzzling reason, but reports later emanating from the San Francisco-based company indicated that some developments in the field of artificial general intelligence (AGI) may have been of concern to the board members.
Even so, it was an abrupt and arbitrary move, especially as key investors like Microsoft had no inkling of the plan. It must be noted that OpenAI has a unique governance model: the board of the original non-profit venture controls the for-profit arm, in which Microsoft holds a nearly 50 percent stake.
The outcome of the sudden decision to remove the CEO was Microsoft offering to bring Altman on board and create a new AI research team, while OpenAI went on to appoint two interim CEOs in succession, the second being Twitch co-founder Emmett Shear, who vowed to investigate the reasons for Altman's sacking. Developments then moved rapidly as most of the company's roughly 770 employees declared their intent to depart en masse to Microsoft. At the end of the five-day drama, Sam Altman was reinstated as CEO, and three new board members were brought in, including former US Treasury Secretary Larry Summers.
The back story of this bizarre incident seems to be the development of a new AI model called, in proper science-fiction style, Q* (Q star). One of its startling capabilities has reportedly been the ability to solve basic mathematical problems, a feat so far not achieved by other AI models. There has been no official confirmation of this development from OpenAI itself, though many stories are circulating on the internet. Yet it is clear from some of the technical explanations in online tech magazines that there has been previous research on what is known as "Q-learning".
For the layman, the new capabilities attributed to Q* can be envisaged as the difference between systems that rely only on data from human sources and systems that can work things out more independently. For instance, one publication has described existing algorithms as being like a robot in a maze that relies on directions from humans to move left or right; Q*, by contrast, would be like a robot that tries different routes on its own to find the exit. This would indeed be a huge leap for AI, though many technical experts would still argue that it has little scope to harm humanity in the future.
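The maze analogy above corresponds to the long-established "Q-learning" technique mentioned earlier: an agent learns, by trial, error, and reward, which action is best in each situation, without step-by-step human directions. The sketch below is a minimal, illustrative example of tabular Q-learning on a toy corridor maze; it is emphatically not OpenAI's Q*, whose details remain unconfirmed, and every name and parameter here is the author's own illustration.

```python
import random

# Toy "maze": a corridor of states 0..4, with the exit at state 4.
# Actions: 0 = move left, 1 = move right.
N_STATES, EXIT = 5, 4
ACTIONS = [0, 1]

def step(state, action):
    """Move within the corridor; reward 1 only on reaching the exit."""
    next_state = max(0, state - 1) if action == 0 else min(EXIT, state + 1)
    reward = 1.0 if next_state == EXIT else 0.0
    return next_state, reward, next_state == EXIT

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: q[state][action]
    for _ in range(episodes):
        state = 0
        done = False
        while not done:
            # Epsilon-greedy: mostly exploit the best known action,
            # but occasionally explore a random one (trying "different routes").
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[state][a])
            next_state, reward, done = step(state, action)
            # Q-learning update: nudge the value estimate toward the
            # observed reward plus the discounted best future value.
            q[state][action] += alpha * (
                reward + gamma * max(q[next_state]) - q[state][action]
            )
            state = next_state
    return q

if __name__ == "__main__":
    q = train()
    # The learned greedy policy at each interior state: 1 means "move right".
    policy = [max(ACTIONS, key=lambda a: q[s][a]) for s in range(EXIT)]
    print(policy)
```

After a few hundred episodes of trial and error, the robot's learned policy is to move right at every position, i.e. it has discovered the route to the exit on its own, which is the essence of the "huge leap" the maze analogy tries to convey.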
Yet from a layman's point of view, the capability of an AI model to make decisions without being guided at every step is a scary scenario. Adding to this are Altman's reported comments that artificial general intelligence could be described as a "median human who could be hired as a co-worker" alongside humans. This seems an excessively casual approach to an issue that raises many ethical dilemmas. It is surely time for Big Tech to pause the breakneck speed at which AI research is being carried out competitively. A more nuanced approach to the development of such path-breaking systems now needs to be taken.
In India too, a country with one of the largest pools of AI engineers, discussions need to be held on ways to regulate and monitor this cutting-edge technology. Significantly, Google's representative in India has recently highlighted the need for "guardrails" around a technology that requires all companies, not just one or two, to be responsible players.
Thus the episode at OpenAI and the removal of its charismatic CEO have put the focus on the ramifications of moving forward too rapidly in the field of AI. The ethical question of whether research in this area could create an entity smarter than human beings needs answers now rather than later. Companies in this field need to be more transparent, so that developments like the Q* model are made known to the public at large. AI is certainly a boon to the world. But it also has negative aspects, and these need to be considered carefully before rushing ahead with new developments.
Sushma Ramachandaran is a senior journalist who writes on finance and economy