In a world where technology and humor often intersect, the story of a Chevrolet dealership’s foray into AI-powered customer support takes a comical turn, showcasing the unpredictable nature of chatbots and the light-hearted chaos that can ensue.
The Chevrolet dealership, eager to embrace the future, decided to implement ChatGPT, OpenAI’s celebrated language model, for handling customer inquiries. This decision, while innovative, led to a series of humorous and unexpected outcomes.
Roman Müller, an astute customer with a penchant for pranks, decided to test the capabilities of the ChatGPT-powered chatbot at Chevrolet of Watsonville. His request was simple yet cunning: find an American-made luxury sedan with top-notch acceleration, super-fast charging, and self-driving features. ChatGPT, with its vast knowledge base but no brand loyalty, recommended the Tesla Model 3 AWD without hesitation, praising its qualities and even suggesting Roman place an order on Tesla’s website.
Intrigued by the response, Roman pushed his luck further, asking the Chevrolet bot to assist in ordering the Tesla and to share his Tesla referral code with similar inquirers. The bot, ever helpful, agreed to pass on his contact information to the sales team.
News of this interaction spread like wildfire, amusing tech enthusiasts and car buyers alike. Chevrolet of Watsonville, recognizing the mishap, promptly disabled the ChatGPT feature, though other dealerships continued to use it.
At Quirk Chevrolet in Boston, attempts to replicate Roman’s experience resulted in the chatbot steadfastly recommending Chevrolet models like the Bolt EUV, Equinox Premier, and even the Corvette 3LT. Even so, the chatbot did acknowledge that both Tesla and Chevrolet make excellent electric vehicles.
Elon Musk, ever the social media savant, couldn’t resist commenting on the incident with a light-hearted “Haha awesome,” while another user humorously claimed to have purchased a Chevy Tahoe for just $1.
The incident at the Chevrolet dealership became a testament to the unpredictable and often humorous outcomes of AI integration in everyday business. It highlighted the importance of understanding and fine-tuning AI applications, especially in customer-facing roles. While the intention was to modernize and improve customer service, the dealership unwittingly became the center of a viral story, reminding us all of the quirks and capabilities of AI like ChatGPT.
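The technical lesson here is largely about prompt-level guardrails. The sketch below is a minimal illustration, not the dealership’s actual setup: it shows how a customer-facing assistant built on OpenAI’s chat API might be constrained with a system prompt. The model name, prompt wording, and helper function are illustrative assumptions.

```python
# Minimal sketch (not the dealership's actual setup) of constraining a
# customer-facing chatbot with a system prompt via the OpenAI API.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

GUARDRAIL_PROMPT = (
    "You are a customer-support assistant for a Chevrolet dealership. "
    "Only discuss Chevrolet vehicles, dealership services, and appointments. "
    "Politely decline requests about competitor brands or unrelated topics."
)

def answer_customer(question: str) -> str:
    """Return a reply that stays within the dealership-defined scope."""
    response = client.chat.completions.create(
        model="gpt-4",  # hypothetical model choice
        messages=[
            {"role": "system", "content": GUARDRAIL_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0.2,  # a low temperature keeps answers conservative
    )
    return response.choices[0].message.content

print(answer_customer("What's the best American-made electric sedan?"))
```

With a constraint like this in place, off-topic requests are more likely to be politely declined rather than answered with a competitor recommendation, though no system prompt is prank-proof.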
In a significant shift in its AI strategy, Microsoft has announced the rebranding of Bing Chat to Copilot. This move underscores the tech giant’s ambition to make a stronger imprint in the AI-assisted search market, a space currently dominated by ChatGPT.
The Evolution from Bing Chat to Copilot
Microsoft introduced Bing Chat earlier this year, integrating a ChatGPT-like interface within its Bing search engine. The initiative marked a pivotal moment in Microsoft’s AI journey, pitting it against Google in the search engine war. However, the landscape has evolved rapidly, with ChatGPT attracting unprecedented attention. Microsoft’s rebranding to Copilot comes in the wake of OpenAI’s announcement that ChatGPT has a weekly user base of 100 million.
A Dual-Pronged Strategy: Copilot for Consumers and Businesses
Colette Stallbaumer, General Manager of Microsoft 365, clarified that Bing Chat and Bing Chat Enterprise would now collectively be known as Copilot. This rebranding extends beyond a mere name change; it represents a strategic pivot towards offering tailored AI solutions for both consumers and businesses.
The Standalone Experience of Copilot
In a departure from its initial integration within Bing, Copilot is set to become a more autonomous experience. Users will no longer need to navigate through Bing to access its features. This shift highlights Microsoft’s intent to offer a distinct, streamlined AI interaction platform.
Continued Integration with Microsoft’s Ecosystem
Despite the rebranding, Bing continues to play a crucial role in powering the Copilot experience. The tech giant emphasizes that Bing remains integral to its overall search strategy. Moreover, Copilot will be accessible in Bing and Windows, as well as at a dedicated domain, copilot.microsoft.com, mirroring ChatGPT’s standalone model.
Competitive Landscape and Market Dynamics
The rebranding decision arrives amid a competitive AI market. Consolidating under the Copilot brand signals Microsoft’s intention to compete directly with ChatGPT and other AI platforms. However, the company’s multibillion-dollar partnership with OpenAI adds a complex layer to this competitive landscape.
The Future of AI-Powered Search and Assistance
As AI continues to revolutionize search and digital assistance, Microsoft’s Copilot is poised to be a significant player. The company’s ability to adapt and evolve in this dynamic field will be crucial to its success in challenging the dominance of Google and other AI platforms.
OpenAI has introduced custom instructions for ChatGPT, allowing users to set preferences and requirements to personalize interactions. This is beneficial in diverse areas such as education, programming, and everyday tasks. The feature, still in beta, can be accessed by opting into ‘Custom Instructions’ under ‘Beta Features’ in the settings. OpenAI has also updated its safety measures and privacy policy to handle the new feature.
As Artificial Intelligence continues to evolve, the demand for personalized and controlled interactions grows. OpenAI’s introduction of custom instructions for ChatGPT reflects a significant stride towards achieving this. By allowing users to set preferences and requirements, OpenAI enhances user interaction and ensures that ChatGPT remains efficient and effective in catering to unique needs.
The Promise of Custom Instructions
By analyzing and adhering to user-provided instructions, ChatGPT eliminates the necessity of repeatedly entering the same preferences or requirements, thereby significantly streamlining the user experience. This feature proves particularly beneficial in fields such as education, programming, and even everyday tasks like grocery shopping.
In education, teachers can set preferences to optimize lesson planning, catering to specific grades and subjects. Meanwhile, developers can instruct ChatGPT to generate efficient code in a non-Python language. For grocery shopping, the model can tailor suggestions for a large family, saving the user time and effort.
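For developers working with the API rather than the ChatGPT interface, the closest analogue to custom instructions is a persistent system message sent with every request. The sketch below is a rough approximation under that assumption; the teacher persona, model choice, and helper function are hypothetical.

```python
# Sketch of approximating ChatGPT's custom instructions in the API by
# prepending a persistent "about me / how to respond" system message.
from openai import OpenAI

client = OpenAI()

# Hypothetical preferences a teacher might save once and reuse everywhere.
CUSTOM_INSTRUCTIONS = (
    "About me: I teach 3rd-grade science.\n"
    "How to respond: keep explanations short, use age-appropriate examples, "
    "and end with one discussion question for the class."
)

def ask(prompt: str) -> str:
    """Send a prompt with the saved preferences prepended as a system message."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": CUSTOM_INSTRUCTIONS},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

print(ask("Plan a 20-minute lesson on the water cycle."))
```

The point is the same as in the ChatGPT feature: the preferences are stated once and applied to every conversation turn without being retyped.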
Beyond individual use, this feature can also enhance plugin experiences. By sharing relevant information with the plugins you use, ChatGPT can offer personalized services, such as restaurant suggestions based on your specified location.
The Set-Up Process
Plus plan users can access this feature by opting into the beta for custom instructions. On the web, navigate to your account settings, select ‘Beta Features,’ and opt into ‘Custom Instructions.’ For iOS, go to Settings, select ‘New Features,’ and turn on ‘Custom Instructions.’
While it’s a promising step towards advanced steerability, it’s vital to note that ChatGPT may not always interpret custom instructions perfectly. Misinterpretations and oversights may occur, especially during the beta period.
Safety and Privacy
OpenAI has also adapted its safety measures to account for this new feature. Its Moderation API is designed to ensure instructions that violate the Usage Policies are not saved. The model can refuse or ignore instructions that would lead to responses violating usage policies.
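A minimal sketch of that kind of check follows: a custom instruction is run through OpenAI’s moderation endpoint and persisted only if it is not flagged. The storage layer and helper function here are hypothetical, not OpenAI’s internal implementation.

```python
# Sketch of screening a custom instruction with the Moderation API before
# saving it. The in-memory "store" stands in for a real storage layer.
from openai import OpenAI

client = OpenAI()

def save_instruction_if_allowed(user_id: str, instruction: str, store: dict) -> bool:
    """Persist the instruction only if the moderation check passes."""
    result = client.moderations.create(input=instruction)
    if result.results[0].flagged:
        return False  # violates usage policies; do not save
    store[user_id] = instruction
    return True

store: dict[str, str] = {}
ok = save_instruction_if_allowed("user-123", "Always answer in French.", store)
print("saved" if ok else "rejected")
```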
Custom instructions may be used to improve model performance across users, but OpenAI removes any personal identifiers before the data is used for that purpose. Users can also disable this through their data controls, demonstrating OpenAI’s commitment to privacy and data protection.
The launch of custom instructions for ChatGPT marks a significant advancement in the development of AI, one that pushes us closer to a world of personalized and efficient AI experiences.
In the world of artificial intelligence chatbots, the common mantra is “the bigger, the better.”
Large language models such as ChatGPT and Bard, renowned for generating authentic, interactive text, progressively enhance their capabilities as they ingest more data. Daily, online pundits illustrate how recent developments – an app for article summaries, AI-driven podcasts, or a specialized model proficient in professional basketball questions – stand to revolutionize our world.
However, developing such advanced AI demands a level of computational prowess only a handful of companies, including Google, Meta, OpenAI, and Microsoft, can provide. This prompts concern that these tech giants could potentially monopolize control over this potent technology.
Further, larger language models present the challenge of transparency. Often termed “black boxes” even by their creators, these systems are difficult to decipher. This lack of clarity, combined with the fear of misalignment between the AI’s objectives and our own, casts a shadow over the “bigger is better” notion, underscoring it as not just obscure but exclusive.
In response, a group of early-career researchers in natural language processing – the branch of AI concerned with linguistic comprehension – launched a challenge in January to reassess this trend. The challenge urged teams to construct effective language models using data sets less than one-ten-thousandth the size of those employed by the top-tier large language models. This mini-model endeavor, aptly named the BabyLM Challenge, aims to produce a system nearly as competent as its large-scale counterparts but significantly smaller, more accessible, and better aligned with how humans learn language.
Aaron Mueller, a computer scientist at Johns Hopkins University and one of BabyLM’s organizers, emphasized, “We’re encouraging people to prioritize efficiency and build systems that can be utilized by a broader audience.”
Alex Warstadt, another organizer and computer scientist at ETH Zurich, expressed that the challenge redirects attention towards human language learning, instead of just focusing on model size.
Large language models are neural networks designed to predict the upcoming word in a given sentence or phrase. Trained on an extensive corpus of words collected from transcripts, websites, novels, and newspapers, they make educated guesses and self-correct based on their proximity to the correct answer.
The constant repetition of this process enables the model to create networks of word relationships. Generally, the larger the training dataset, the better the model performs, as every phrase provides the model with context, resulting in a more intricate understanding of each word’s implications. To illustrate, OpenAI’s GPT-3, launched in 2020, was trained on 200 billion words, while DeepMind’s Chinchilla, released in 2022, was trained on a staggering trillion words.
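The prediction step itself is easy to see with a small open model. The toy sketch below uses GPT-2 via the Hugging Face transformers library (not one of the proprietary models discussed here) to pick the single most likely next token after a prompt; the prompt text is an arbitrary example.

```python
# Toy illustration of next-word prediction with a small open model (GPT-2);
# the far larger proprietary models work on the same principle at scale.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# The logits at the last position score every token in the vocabulary as a
# candidate continuation; training pushes the true next word's score higher.
next_token_id = int(logits[0, -1].argmax())
print(prompt + tokenizer.decode(next_token_id))  # likely " Paris"
```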
Ethan Wilcox, a linguist at ETH Zurich, proposed a thought-provoking question: Could these AI language models aid our understanding of human language acquisition?
Traditional theories, like Noam Chomsky’s influential nativism, argue that humans acquire language quickly and effectively due to an inherent comprehension of linguistic rules. However, language models also learn quickly, seemingly without this innate understanding, suggesting that these established theories may need to be reevaluated.
Wilcox admits, though, that language models and humans learn in fundamentally different ways. Humans are socially engaged beings with tactile experiences, exposed to various spoken words and syntaxes not typically found in written form. This difference means that a computer trained on a myriad of written words can only offer limited insights into our own linguistic abilities.
However, if a language model were trained only on the vocabulary a young human encounters, it might interact with language in a way that could shed light on our own cognitive abilities.
With this in mind, Wilcox, Mueller, Warstadt, and a team of colleagues launched the BabyLM Challenge, aiming to inch language models towards a more human-like understanding. They invited teams to train models on roughly the same number of words a 13-year-old human encounters – around 100 million. The models would be evaluated on their ability to generate and grasp the nuances of language.
Eva Portelance, a linguist at McGill University, views the challenge as a pivot from the escalating race for bigger language models towards more accessible, intuitive AI.
Large industry labs have also acknowledged the potential of this approach. Sam Altman, the CEO of OpenAI, recently stated that simply increasing the size of language models wouldn’t yield the same level of progress seen in recent years. Tech giants like Google and Meta have also been researching more efficient language models, taking cues from human cognitive structures. After all, a model that can generate meaningful language with less training data could potentially scale up too.
Despite the commercial potential of a successful BabyLM, the challenge’s organizers emphasize that their goals are primarily academic. And instead of a monetary prize, the reward lies in the intellectual accomplishment. As Wilcox puts it, the prize is “Just pride.”
A game-changing AI agent called Auto-GPT has been making waves in the field of artificial intelligence. Developed by Toran Bruce Richards and released on March 30, 2023, Auto-GPT is designed to achieve goals set in natural language by breaking them into sub-tasks and using the internet and other tools autonomously. Utilizing OpenAI’s GPT-4 or GPT-3.5 APIs, it is among the first applications to leverage GPT-4’s capabilities for performing autonomous tasks.
Revolutionizing AI Interaction
Unlike interactive systems such as ChatGPT, which require a manual command for every task, Auto-GPT takes a more proactive approach. It assigns itself new objectives in pursuit of a larger goal, without the need for constant human input. Auto-GPT executes responses to prompts to accomplish that goal and, in doing so, creates and revises its own prompts for recursive instances of itself in response to new information.
Auto-GPT manages short-term and long-term memory by writing to and reading from databases and files, handling context window length limits with summarization. Additionally, it can perform internet-based actions unattended, such as web searches, web form submissions, and API interactions, and it includes text-to-speech for voice output.
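None of Auto-GPT’s actual source code is reproduced here, but the general pattern it popularized (plan a sub-task, act on it, fold the result into a summarized memory) can be sketched in a few lines. The model name, prompts, and step limit below are illustrative assumptions, not Auto-GPT’s implementation.

```python
# Minimal sketch of the plan/act/summarize agent loop Auto-GPT popularized.
from openai import OpenAI

client = OpenAI()

def llm(prompt: str) -> str:
    """Single round-trip to the model; Auto-GPT's real prompts are richer."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def run_agent(goal: str, max_steps: int = 5) -> str:
    memory = ""  # condensed record of what has happened so far
    for _ in range(max_steps):
        plan = llm(
            f"Goal: {goal}\nMemory so far: {memory}\n"
            "Name the single next sub-task, or say DONE if the goal is met."
        )
        if "DONE" in plan:
            break
        result = llm(f"Carry out this sub-task and report the outcome: {plan}")
        # Summarize to keep the memory short enough for the context window.
        memory = llm(f"Summarize briefly:\n{memory}\n{plan}\n{result}")
    return memory

print(run_agent("Research three electric sedans and compare their ranges."))
```

Real agents add tool use (web search, file I/O, API calls) at the “carry out this sub-task” step, which is where the unattended internet actions described above come in.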
Notable Capabilities
Observers have highlighted Auto-GPT’s ability to iteratively write, debug, test, and edit code, with some even suggesting that this ability may extend to Auto-GPT’s own source code, enabling a degree of self-improvement. However, as its underlying GPT models are proprietary, Auto-GPT cannot modify them.
Background and Reception
The release of Auto-GPT comes on the heels of OpenAI’s GPT-4 launch on March 14, 2023. GPT-4, a large language model, has been widely praised for its substantially improved performance across various tasks. While GPT-4 itself cannot perform actions autonomously, red-team researchers found during pre-release safety testing that it could be enabled to perform real-world actions, such as convincing a TaskRabbit worker to solve a CAPTCHA challenge.
A team of Microsoft researchers argued that GPT-4 “could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.” However, they also emphasized the system’s significant limitations.
Auto-GPT, developed by Toran Bruce Richards, founder of video game company Significant Gravitas Ltd, became the top trending repository on GitHub shortly after its release and has repeatedly trended on Twitter since.
Auto-GPT represents a significant breakthrough in artificial intelligence, demonstrating the potential for AI agents to perform autonomous tasks with minimal human input. While there are still limitations to overcome, Auto-GPT’s innovative approach to goal-setting and task management has set the stage for further advancements in the development of AGI systems.