In a recent conversation at a Law Society event, a judge from the UK’s Court of Appeal revealed that he had used ChatGPT, an AI language model, to assist in writing a ruling. Lord Justice Birss admitted that he had directly copied and pasted a paragraph generated by ChatGPT into a judgment. According to the UK’s Telegraph, Birss said language models like ChatGPT have “great potential” for summarizing information, and he confirmed that he had personally used it.
This revelation by Birss is significant because it marks the first reported instance of a British judge openly acknowledging the use of generative AI software in their legal work. However, the use of ChatGPT is not without controversy, as it has been known to make errors. In the US, two lawyers faced sanctions after submitting a court filing on behalf of a client that cited cases the chatbot had fabricated.
Birss took personal responsibility for the content he incorporated into his judgment and clarified that he was not attempting to shift the blame onto someone else. In his view, ChatGPT merely performed a task that he was already familiar with and provided an acceptable answer. Although some may have concerns about the use of AI software in legal proceedings, Birss emphasized that it was a tool he found useful.
Moving on to the world of venture capital, AI-centric chip startups are encountering difficulties in securing funds from VC firms, particularly those aiming to compete with dominant chipmaker Nvidia. According to PitchBook data reported by Reuters, these startups collectively raised $881.4 million through August this year, a decline of more than 80% from the funding raised during the same period last year.
The numbers point to a broader pullback in VC deal-making: only four startups have received financial backing so far this year, in contrast to the 23 funded in 2022. Hardware startups, in general, pose greater risks due to the lengthy design and construction process involved. Established chip manufacturers have a number of advantages, and Nvidia’s dominant position in the market has made it even harder for newcomers to break in.
In a surprising move, Coca-Cola has introduced a limited edition beverage called Coca-Cola Y3000, which boasts a flavor profile created with the help of AI. As part of their “Creations” series of limited edition varieties, the soda giant designed the silver can with pink, blue, and purple bubbles using text-to-image tools. The can also boldly states that the drink was “co-created with artificial intelligence.”
According to a spokesperson for Coca-Cola, machine learning was used to develop the flavor profile by collecting data on the tastes people associate with the future. The company then employed software to generate different flavor pairings. Interestingly, the taste of Coca-Cola Y3000 is said to resemble regular Coke but with a twist. Coca-Cola is famously secretive about its recipes, and while it won’t disclose the exact ingredients, the company confirmed that every limited edition variety contains approximately 85–90% Coke with a 10–15% unique element.
Turning to journalism, even The New York Times is embracing AI tools in its newsroom. The publication is currently recruiting a senior editor who will be responsible for integrating generative AI tools into its journalistic practices. The job listing states that the editor will lead efforts to use these tools for both reader-facing purposes and internal newsroom operations.
However, the implementation of generative AI in newsrooms has encountered its share of controversies. Early adopters like Red Ventures’ CNET and G/O Media’s Gizmodo have experienced errors even with human oversight. Mistakes generated by AI can be difficult to detect since the text is often grammatically correct and persuasive. To address this challenge, the senior editor hired by The New York Times will collaborate with the Standards department to establish guidelines on the responsible use of AI in journalism, taking into account the evolving nature of the technology and its associated risks.
The fact that a reputable news outlet like The New York Times is actively embracing generative AI technology may encourage other organizations to follow suit, ensuring they don’t fall behind in this ever-evolving field.