GitHub previews Copilot Workspace, an AI developer environment to turn ideas into software

GitHub has revealed Copilot Workspace, its AI-native developer environment. Using natural language, developers can brainstorm, plan, build, test, and run code faster and more easily than before. First teased at GitHub's 2023 user conference, Copilot Workspace is now available in technical preview, and interested developers can sign up for the waitlist. — Read More

#devops

There’s An AI For That (TAAFT)

“There’s An AI For That” is a leading AI aggregator offering a database of over 12,400 AIs available for more than 15,000 tasks. The platform provides a remarkable inventory of cutting-edge AI solutions for almost every need. — Read More

#devops

OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework

The reproducibility and transparency of large language models are crucial for advancing open research, ensuring the trustworthiness of results, and enabling investigations into data and model biases, as well as potential risks. To this end, we release OpenELM, a state-of-the-art open language model. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. For example, with a parameter budget of approximately one billion parameters, OpenELM exhibits a 2.36% improvement in accuracy compared to OLMo while requiring 2× fewer pre-training tokens. Diverging from prior practices that only provide model weights and inference code, and pre-train on private datasets, our release includes the complete framework for training and evaluation of the language model on publicly available datasets, including training logs, multiple checkpoints, and pre-training configurations. We also release code to convert models to the MLX library for inference and fine-tuning on Apple devices. This comprehensive release aims to empower and strengthen the open research community, paving the way for future open research endeavors. Our source code along with pre-trained model weights and training recipes is available at this https URL. Additionally, OpenELM models can be found on HuggingFace at this https URL. — Read More
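The core idea of layer-wise scaling is that transformer layers need not all have the same width: per-layer parameters (such as attention heads and feed-forward width) are interpolated from smaller values in early layers to larger ones in later layers, so a fixed parameter budget is spent where it helps most. A minimal sketch of that interpolation, with hypothetical parameter names and ranges (not the exact OpenELM configuration), might look like:

```python
# Sketch of layer-wise scaling: instead of giving every transformer layer
# identical dimensions, linearly interpolate each layer's width parameters
# between a minimum and maximum scale factor. All names and defaults here
# are illustrative, not taken from the OpenELM codebase.
def layerwise_scaling(num_layers, scale_min=0.5, scale_max=1.0,
                      base_heads=16, base_ffn_mult=4.0):
    configs = []
    for i in range(num_layers):
        # Interpolation position t goes from 0 (first layer) to 1 (last layer).
        t = i / (num_layers - 1) if num_layers > 1 else 0.0
        scale = scale_min + (scale_max - scale_min) * t
        configs.append({
            "heads": max(1, round(base_heads * scale)),
            "ffn_mult": base_ffn_mult * scale,
        })
    return configs
```

Under this scheme, early layers are narrower and later layers wider, while the total parameter count stays below what uniformly wide layers would cost.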

#devops, #nlp

How Meta is paving the way for synthetic social networks

On Thursday, the AI hype train rolled through Meta’s family of apps. The company’s Meta AI assistant, a ChatGPT-like bot that can answer a wide range of questions, is beginning to roll out broadly across Facebook, Messenger, Instagram, and WhatsApp.

Powering the bot is Llama 3, the latest and most capable version of Meta’s large language model. As with its predecessors — and in contrast to models from OpenAI, Google, and Anthropic — Llama 3 is open source. Today Meta made it available in two sizes: one with 8 billion parameters, and one with 70 billion parameters. (Parameters are the variables inside a large language model; in general, the more parameters a model contains, the smarter and more sophisticated its output.) — Read More

#big7, #devops

Meta confirms that its Llama 3 open source LLM is coming in the next month

At an event in London on Tuesday, Meta confirmed that it plans an initial release of Llama 3 — the next generation of its large language model used to power generative AI assistants — within the next month.

This confirms a report published on Monday by The Information that Meta was getting close to launch.

“Within the next month, actually less, hopefully in a very short period of time, we hope to start rolling out our new suite of next-generation foundation models, Llama 3,” said Nick Clegg, Meta’s president of global affairs.  — Read More

#nlp, #devops

Databricks launches DBRX, challenging Big Tech in the open source AI race

Databricks, a fast-growing enterprise software company, announced today the release of DBRX, a new open source artificial intelligence model that the company claims sets a new standard for open source AI efficiency and performance.

The model, which contains 132 billion parameters, outperforms leading open source alternatives like Llama 2-70B and Mixtral on key benchmarks measuring language understanding, programming ability, and math skills. — Read More

#devops

Open Interpreter: An Interesting AI Tool to Locally Run ChatGPT-Like Code Interpreter

After Auto-GPT and the Code Interpreter API, a new open-source project is making waves in the AI community. The project is named Open Interpreter, and it’s been developed by Killian Lucas and a team of open-source contributors. It combines ChatGPT plugin functionality, Code Interpreter, and something like Windows Copilot to make AI a ubiquitous solution on any platform. You can use Open Interpreter to do almost anything you can think of: interact with the operating system, files, folders, programs, and the internet, all from a friendly terminal interface. So if you are interested, learn how to set up and use Open Interpreter locally on your PC. — Read More

Open Interpreter lets LLMs run code (Python, JavaScript, Shell, and more) locally. You can chat with Open Interpreter through a ChatGPT-like interface in your terminal by running $ interpreter after installing.

This provides a natural-language interface to your computer’s general-purpose capabilities:

Create and edit photos, videos, PDFs, etc.
Control a Chrome browser to perform research
Plot, clean, and analyze large datasets
…etc.
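Getting started is minimal. Assuming the PyPI package name given in the project README (open-interpreter), installation and launch look like this:

```shell
# Install Open Interpreter (package name per the project README)
pip install open-interpreter

# Launch the ChatGPT-like chat interface in your terminal
interpreter
```

Exact flags and configuration options vary by version; check interpreter --help after installing.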

GitHub

The 01 Project is building an open-source ecosystem for AI devices.

Our flagship operating system can power conversational devices like the Rabbit R1, the Humane Pin, or the Star Trek computer.

We intend to become the GNU/Linux of this space by staying open, modular, and free.

GitHub

#devops

Introducing Devin, the first AI software engineer

Read More

#devops

Elon Musk says xAI will open source Grok this week

Elon Musk’s AI startup xAI will open source Grok, its chatbot rivaling ChatGPT, this week, the entrepreneur said, days after suing OpenAI and complaining that the Microsoft-backed startup had deviated from its open source roots.

xAI released Grok last year, arming it with features including access to “real-time” information and views undeterred by “politically correct” norms. The service is available to customers paying for X’s $16 monthly subscription. — Read More

GitHub

#devops

Gemma: Introducing new state-of-the-art open models

At Google, we believe in making AI helpful for everyone. We have a long history of contributing innovations to the open community, such as with Transformers, TensorFlow, BERT, T5, JAX, AlphaFold, and AlphaCode. Today, we’re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly.

Gemma is a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. Developed by Google DeepMind and other teams across Google, Gemma is inspired by Gemini, and the name reflects the Latin gemma, meaning “precious stone.” Accompanying our model weights, we’re also releasing tools to support developer innovation, foster collaboration, and guide responsible use of Gemma models. — Read More

#devops