Spread Your Wings: Falcon 180B is here

Today, we’re excited to welcome TII’s Falcon 180B to Hugging Face! Falcon 180B sets a new state of the art for open models. It is the largest openly available language model, with 180 billion parameters, and was trained on a massive 3.5 trillion tokens using TII’s RefinedWeb dataset. This is the longest single-epoch pretraining run for an open model to date.

You can find the model on the Hugging Face Hub (base and chat model) and interact with the model on the Falcon Chat Demo Space.
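For a sense of what using it looks like, here is a minimal sketch of loading the base model with the `transformers` library; it assumes the `tiiuae/falcon-180B` repo id on the Hub (the chat variant is `tiiuae/falcon-180B-chat`), that you have been granted access to the gated weights, and enough GPU memory to hold a 180B-parameter checkpoint:

```python
# Minimal sketch: load Falcon 180B with transformers (assumes access to the
# gated "tiiuae/falcon-180B" repo and multiple large GPUs for the weights).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-180B"  # base model; "tiiuae/falcon-180B-chat" for chat

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # reduced precision; fp32 would not fit
    device_map="auto",           # shard layers across available GPUs
)

inputs = tokenizer("Falcon 180B is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```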

In terms of capabilities, Falcon 180B achieves state-of-the-art results across natural language tasks. It tops the leaderboard for (pre-trained) open-access models and rivals proprietary models like PaLM-2. — Read More

#chatbots, #devops, #nlp

Pentagon unveils ‘Replicator’ drone program to compete with China

The Pentagon committed on Monday to fielding thousands of attritable, autonomous systems across multiple domains within the next two years as part of a new initiative to better compete with China.

The program, dubbed Replicator, was announced by Deputy Defense Secretary Kathleen Hicks at the National Defense Industrial Association’s Emerging Technologies conference. — Read More

#china-vs-us, #dod

ImageBind: One Embedding Space To Bind Them All

We present ImageBind, an approach to learn a joint embedding across six different modalities: images, text, audio, depth, thermal, and IMU data. We show that not all combinations of paired data are necessary to train such a joint embedding; image-paired data alone is sufficient to bind the modalities together. ImageBind can leverage recent large-scale vision-language models, and extends their zero-shot capabilities to new modalities just by using their natural pairing with images. It enables novel emergent applications ‘out-of-the-box’, including cross-modal retrieval, composing modalities with arithmetic, and cross-modal detection and generation. The emergent capabilities improve with the strength of the image encoder, and we set a new state of the art on emergent zero-shot recognition tasks across modalities, outperforming specialist supervised models. Finally, we show strong few-shot recognition results that outperform prior work, and that ImageBind serves as a new way to evaluate vision models on visual and non-visual tasks. — Read More
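The “one embedding space” idea is easiest to see in code. The sketch below shows zero-shot cross-modal retrieval following the usage pattern in the facebookresearch/ImageBind repository; the import paths and the file names are assumptions, so check the repo README before running:

```python
# Sketch: embed text, images, and audio into ImageBind's shared space, then
# compare any pair of modalities with dot products. Paths are placeholders.
import torch
from imagebind import data
from imagebind.models import imagebind_model
from imagebind.models.imagebind_model import ModalityType

device = "cuda" if torch.cuda.is_available() else "cpu"

model = imagebind_model.imagebind_huge(pretrained=True)
model.eval()
model.to(device)

inputs = {
    ModalityType.TEXT: data.load_and_transform_text(["a dog", "a car"], device),
    ModalityType.VISION: data.load_and_transform_vision_data(
        ["dog.jpg", "car.jpg"], device  # hypothetical local image files
    ),
    ModalityType.AUDIO: data.load_and_transform_audio_data(
        ["bark.wav"], device  # hypothetical local audio file
    ),
}

with torch.no_grad():
    emb = model(inputs)

# Every modality lands in the same space, so any pair can be compared:
print(torch.softmax(emb[ModalityType.VISION] @ emb[ModalityType.TEXT].T, dim=-1))
print(torch.softmax(emb[ModalityType.AUDIO] @ emb[ModalityType.VISION].T, dim=-1))
```

The second print is the emergent part: audio and vision were never paired during training, yet they can be matched because each was bound to the same space through images.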

#multi-modal

Meet Lisa, OTV And Odisha’s First AI News Anchor Set To Revolutionize TV Broadcasting & Journalism

— Read More

#videos