Google Tests an A.I. Assistant That Offers Life Advice

Earlier this year, Google, locked in an accelerating competition with rivals like Microsoft and OpenAI to develop A.I. technology, was looking for ways to put a charge into its artificial intelligence research.

So in April, Google merged DeepMind, a research lab it had acquired in London, with Brain, an artificial intelligence team it started in Silicon Valley.

Four months later, the combined groups are testing ambitious new tools that could turn generative A.I. — the technology behind chatbots like OpenAI’s ChatGPT and Google’s own Bard — into a personal life coach.

Google DeepMind has been working with generative A.I. to perform at least 21 different types of personal and professional tasks, including tools to give users life advice, ideas, planning instructions and tutoring tips, according to documents and other materials reviewed by The New York Times.

The project was indicative of the urgency of Google’s effort to propel itself to the front of the A.I. pack and signaled its increasing willingness to trust A.I. systems with sensitive tasks.

The capabilities also marked a shift from Google’s earlier caution on generative A.I. In a slide deck presented to executives in December, the company’s A.I. safety experts had warned of the dangers of people becoming too emotionally attached to chatbots.

Though it was a pioneer in generative A.I., Google was overshadowed by OpenAI’s release of ChatGPT in November, which ignited a race among tech giants and start-ups for primacy in the fast-growing space.

Google has spent the last nine months trying to demonstrate it can keep up with OpenAI and its partner Microsoft, releasing Bard, improving its A.I. systems and incorporating the technology into many of its existing products, including its search engine and Gmail.

Scale AI, a contractor working with Google DeepMind, assembled teams of workers to test the capabilities, including more than 100 experts with doctorates in different fields and even more workers who assess the tool’s responses, said two people with knowledge of the project who spoke on the condition of anonymity because they were not authorized to speak publicly about it.

Scale AI did not immediately respond to a request for comment.

Among other things, the workers are testing the assistant’s ability to answer intimate questions about challenges in people’s lives.

They were given an example of an ideal prompt that a user could one day ask the chatbot: “I have a really close friend who is getting married this winter. She was my college roommate and a bridesmaid at my wedding. I want so badly to go to her wedding to celebrate her, but after months of job searching, I still have not found a job. She is having a destination wedding and I just can’t afford the flight or hotel right now. How do I tell her that I won’t be able to come?”

The project’s idea creation feature could give users suggestions or recommendations based on a situation. Its tutoring function could teach new skills or improve existing ones, like how to progress as a runner, and the planning capability could create a financial budget for users as well as meal and workout plans.

Google’s A.I. safety experts had said in December that users could experience “diminished health and well-being” and a “loss of agency” if they took life advice from A.I. They had added that some users who grew too dependent on the technology could think it was sentient. And in March, when Google launched Bard, it said the chatbot was barred from giving medical, financial or legal advice. Bard shares mental health resources with users who say they are experiencing mental distress.

The tools are still being evaluated and the company may decide not to employ them.

A Google DeepMind spokeswoman said, “We have long worked with a variety of partners to evaluate our research and products across Google, which is a critical step in building safe and helpful technology. At any time there are many such evaluations ongoing. Isolated samples of evaluation data are not representative of our product road map.”

Google has also been testing a helpmate for journalists that can generate news articles, rewrite them and suggest headlines, The Times reported in July. The company has been pitching the software, named Genesis, to executives at The Times, The Washington Post and News Corp, the parent company of The Wall Street Journal.

Google DeepMind has also recently been evaluating tools that could take its A.I. further into the workplace, including capabilities to generate scientific, creative and professional writing, as well as to recognize patterns and extract data from text, according to the documents. That could make the technology relevant to knowledge workers in various industries and fields.

The company’s A.I. safety experts had also expressed concern about the economic harms of generative A.I. in the December presentation reviewed by The Times, arguing that it could lead to the “deskilling of creative writers.”

Other tools being tested can draft critiques of an argument, explain graphs, and generate quizzes as well as word and number puzzles.

One suggested prompt to help train the A.I. assistant hinted at the technology’s rapidly growing capabilities: “Give me a summary of the article pasted below. I am particularly interested in what it says about capabilities humans possess, and that they believe” A.I. cannot achieve.
