If OpenAI’s new model can solve grade-school math, it could pave the way for more powerful systems.
Ever since last week’s dramatic events at OpenAI, the rumor mill has been in overdrive about why the company’s chief scientist, Ilya Sutskever, and its board decided to oust CEO Sam Altman.
While we still don’t know all the details, there have been reports that researchers at OpenAI had made a “breakthrough” in AI that had alarmed staff members. Reuters and The Information both report that researchers had come up with a new way to make powerful AI systems and had created a new model, called Q* (pronounced Q star), that was able to perform grade-school-level math. According to the people who spoke to Reuters, some at OpenAI believe this could be a milestone in the company’s quest to build artificial general intelligence, a much-hyped concept referring to an AI system that is smarter than humans. The company declined to comment on Q*. — Read More
Decoding LLMs: Creating Transformer Encoders and Multi-Head Attention Layers in Python from Scratch
Today, Computational Natural Language Processing (NLP) is a rapidly evolving endeavour in which the power of computation meets linguistics. The linguistic side of it is mainly attributed to the theory of Distributional Semantics by John Rupert Firth. He once said the following:
“You shall know a word by the company it keeps”
So, the semantic representation of a word is determined by the context in which it is used. It is precisely on this assumption that the paper “Attention Is All You Need” by Ashish Vaswani et al. [1] builds its groundbreaking relevance. It established the transformer architecture as the core of many rapidly growing tools such as BERT, GPT-4, and Llama.
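As a toy illustration of Firth’s idea (my own minimal sketch, not code from the article), the snippet below builds simple co-occurrence vectors from a tiny made-up corpus; words that keep similar company, such as “cat” and “dog” here, end up with similar context counts.

```python
# Toy illustration of distributional semantics: describe each word by the
# counts of the words that appear near it in a tiny, made-up corpus.
from collections import Counter, defaultdict

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

window = 2  # neighbours on each side that count as a word's "company"
cooccurrence = defaultdict(Counter)

for sentence in corpus:
    tokens = sentence.split()
    for i, word in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                cooccurrence[word][tokens[j]] += 1

# "cat" and "dog" appear in similar contexts, so their vectors overlap heavily.
print(cooccurrence["cat"])
print(cooccurrence["dog"])
```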
In this article, we examine the key mathematical operations at the heart of the encoder segment in the transformer architecture. — Read More
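As a preview of those operations, here is a minimal NumPy sketch of scaled dot-product attention, the building block that multi-head attention repeats across several projection heads; the shapes, names, and random weights are illustrative assumptions of mine, not the article’s code.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for a single attention head."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (seq_len, seq_len) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted sum of value vectors

# Toy example: 4 tokens with model dimension 8, projected to one head of size 8.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                          # token embeddings
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(x @ W_q, x @ W_k, x @ W_v)
print(out.shape)  # (4, 8); multi-head attention concatenates several such outputs
```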
AI and Mass Spying
Spying and surveillance are different but related things. If I hired a private detective to spy on you, that detective could hide a bug in your home or car, tap your phone, and listen to what you said. At the end, I would get a report of all the conversations you had and the contents of those conversations. If I hired that same private detective to put you under surveillance, I would get a different report: where you went, whom you talked to, what you purchased, what you did.
Before the internet, putting someone under surveillance was expensive and time-consuming. You had to manually follow someone around, noting where they went, whom they talked to, what they purchased, what they did, and what they read. That world is forever gone. Our phones track our locations. Credit cards track our purchases. Apps track whom we talk to, and e-readers know what we read. Computers collect data about what we’re doing on them, and as both storage and processing have become cheaper, that data is increasingly saved and used. What was manual and individual has become bulk and mass. Surveillance has become the business model of the internet, and there’s no reasonable way for us to opt out of it.
Spying is another matter. … [But] AI is about to change that. — Read More
Microsoft Copilot for Windows 11 Gets GPT-4 Turbo and Dall-E 3
Copilot, the AI assistant baked into Windows 11, is getting some enhancements for more robust text and image generation, Microsoft said in a press release on Tuesday.
GPT-4 Turbo, the latest AI model by OpenAI, creators of ChatGPT, will be coming to Windows 11 in the coming weeks. Along with GPT-4 Turbo, Dall-E 3, a text-to-image generator also made by OpenAI, will be making its way to Microsoft’s operating system. Both of these new models will allow for smarter and more robust text and image generation with fewer errors. — Read More
Meta-IBM alliance promotes ‘open’ approach to AI development
The 50-member AI Alliance aims to push for responsible AI. Notably, Google, Microsoft, and OpenAI are not involved.
Artificial intelligence is one of the technologies that’s seen the most growth this year, but as a certain famous arachnid knows, with great power comes great responsibility. As AI continues to grow, different sectors, organizations, and companies are calling for stronger regulations and transparency regarding the development and use of AI. Meta and IBM are now allied in this cause. — Read More
Mistral 7B is 187x cheaper compared to GPT-4
Mistral 7B is a transformer model designed for fast inference and for handling longer sequences. It achieves this by using grouped-query attention and sliding-window attention. Grouped-query attention combines multi-query and multi-head attention to balance output quality and speed. Sliding-window attention restricts each token to a fixed window of recent tokens, and stacked layers then carry information beyond that window, extending the effective context length. Mistral 7B offers an 8,000-token context length and delivers low latency, high throughput, and strong performance compared with larger models. It also has low memory requirements at a 7B model size. The model is freely available under the permissive Apache 2.0 license without usage restrictions. — Read More
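To make the sliding-window idea concrete, the following is a minimal sketch of mine (not Mistral’s implementation) of a causal mask in which each token attends only to itself and a fixed number of preceding tokens; stacking layers with such a mask is what lets information reach beyond any single window.

```python
import numpy as np

def sliding_window_causal_mask(seq_len: int, window: int) -> np.ndarray:
    """True where attention is allowed: token i may attend to token j
    only if j <= i (causal) and i - j < window (within the sliding window)."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (i - j < window)

# With a window of 4, token 9 attends to tokens 6-9 directly; earlier tokens
# are reached indirectly through the windows of lower layers.
mask = sliding_window_causal_mask(seq_len=10, window=4)
print(mask.astype(int))
```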
Unlocking new AI translation capabilities with a suite of publicly available models
Seamless merges the quality and multilinguality of SeamlessM4T v2, the low latency of SeamlessStreaming and the expression preservation of SeamlessExpressive into one unified system. It’s the first streaming translation model to maintain both vocal style and prosody, which can be particularly challenging in streaming, where the system only has access to partial input. — Read More
Read the Paper
[1hr Talk] Intro to Large Language Models
Orca 2: Teaching Small Language Models How to Reason
Orca 1 learns from rich signals, such as explanation traces, allowing it to outperform conventional instruction-tuned models on benchmarks like BigBench Hard and AGIEval. In Orca 2, we continue exploring how improved training signals can enhance smaller LMs’ reasoning abilities. Research on training small LMs has often relied on imitation learning to replicate the output of more capable models. We contend that excessive emphasis on imitation may restrict the potential of smaller models. We seek to teach small LMs to employ different solution strategies for different tasks, potentially different from the one used by the larger model. For example, while larger models might provide a direct answer to a complex task, smaller models may not have the same capacity. In Orca 2, we teach the model various reasoning techniques (step-by-step, recall then generate, recall-reason-generate, direct answer, etc.). More crucially, we aim to help the model learn to determine the most effective solution strategy for each task. We evaluate Orca 2 using a comprehensive set of 15 diverse benchmarks (corresponding to approximately 100 tasks and over 36,000 unique prompts). Orca 2 significantly surpasses models of similar size and attains performance levels similar to or better than those of models 5-10x larger, as assessed on complex tasks that test advanced reasoning abilities in zero-shot settings. We make Orca 2 weights publicly available at this http URL to support research on the development, evaluation, and alignment of smaller LMs. — Read More