“Has Anyone Seen Web3?” — The Complete Roadmap and Resources to Become a Web3 Developer in 2022

20+ documentation resources, tutorials, and videos to help you get started with Web3

Twitter went crazy last month when Musk and Dorsey mocked the idea of Web3. Some called it the future of the internet, while others dismissed it as bogus. But do you know what Web 3.0 actually is and how it works? In this article, you’ll be introduced to this new dimension of the internet and learn how to get started in the field from a developer’s point of view.

Key Takeaways

  • Beginner-friendly Introduction to Web3 and its ecosystem
  • Is Web3 just hype, or the future of the Internet?
  • Roadmap to learn Web3 technology
Read More

#metaverse

CES 2022: AI is driving innovation in ‘smart’ tech

Despite all the stories about big companies bailing out of CES 2022 amidst the latest surge in COVID-19 cases, the consumer electronics show in Las Vegas is still the place to be for robots, autonomous vehicles, smart gadgets, and their inventors — an opportunity to take stock of what’s required to build practical machine intelligence into a consumer product. Read More

#investing

The Technology of SWARM AI

Read More

#videos

Amazing Robot

Read More

#robotics, #videos

8-bit Optimizers via Block-Wise Quantization

Stateful optimizers maintain gradient statistics over time, e.g., the exponentially smoothed sum (SGD with momentum) or squared sum (Adam) of past gradient values. This state can be used to accelerate optimization compared to plain stochastic gradient descent, but it uses memory that might otherwise be allocated to model parameters, thereby limiting the maximum size of models trained in practice. In this paper, we develop the first optimizers that use 8-bit statistics while maintaining the performance levels of using 32-bit optimizer states. To overcome the resulting computational, quantization, and stability challenges, we develop block-wise dynamic quantization. Block-wise quantization divides input tensors into smaller blocks that are independently quantized. Each block is processed in parallel across cores, yielding faster optimization and high-precision quantization. To maintain stability and performance, we combine block-wise quantization with two additional changes: (1) dynamic quantization, a form of non-linear quantization that is precise for both large- and small-magnitude values, and (2) a stable embedding layer to reduce the gradient variance that comes from the highly non-uniform distribution of input tokens in language models. As a result, our 8-bit optimizers maintain 32-bit performance with a small fraction of the memory footprint on a range of tasks, including 1.5B-parameter language modeling, GLUE finetuning, ImageNet classification, WMT’14 machine translation, MoCo v2 contrastive ImageNet retraining+finetuning, and RoBERTa pretraining, without changes to the original optimizer hyperparameters. We open-source our 8-bit optimizers as a drop-in replacement that only requires a two-line code change. Read More
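The block-wise idea is simple enough to sketch. Below is a minimal, hypothetical NumPy illustration of block-wise 8-bit quantization of an optimizer-state tensor: each block stores its own absolute-max scale, so a single outlier only costs precision inside its block rather than across the whole tensor. For brevity it uses plain linear int8 quantization, whereas the paper's dynamic quantization maps values through a non-linear 8-bit codebook; the function names and block size here are illustrative and not taken from the authors' released code.

```python
import numpy as np

def blockwise_quantize(x, block_size=256):
    """Quantize a 1-D float32 array to int8 in independent blocks.

    Each block keeps its own absolute-max scale, so an outlier in one
    block does not degrade precision everywhere else. (Toy linear
    quantization; the paper uses a non-linear dynamic quantization map.)
    """
    x = x.astype(np.float32)
    pad = (-len(x)) % block_size
    padded = np.pad(x, (0, pad))                         # pad so length divides evenly
    blocks = padded.reshape(-1, block_size)
    scales = np.abs(blocks).max(axis=1, keepdims=True)   # per-block absmax
    scales[scales == 0] = 1.0                            # avoid division by zero
    q = np.round(blocks / scales * 127).astype(np.int8)  # 8-bit codes
    return q, scales, len(x)

def blockwise_dequantize(q, scales, n):
    """Recover an approximate float32 array from int8 blocks and scales."""
    x = (q.astype(np.float32) / 127) * scales
    return x.reshape(-1)[:n]

# Example: a state vector with one large outlier stays accurate elsewhere.
state = np.random.randn(10_000).astype(np.float32)
state[3] = 50.0
q, s, n = blockwise_quantize(state)
approx = blockwise_dequantize(q, s, n)
print("max abs error:", np.abs(state - approx).max())
```

In practice the abstract notes that the released optimizers are meant as a drop-in replacement requiring roughly a two-line change to existing training code, so a sketch like this is only useful for understanding the quantization step itself.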

#performance

COUPCAST

Coups, unlike other political crises that unfold over weeks, months, or years, are precisely timed events aimed at ousting a specific individual from power. Because of this precision, the risk of a coup can vary greatly over the course of a year and can change instantaneously when power passes from one leader to another. For this reason, CoupCast estimates a unique risk of a coup attempt for every individual leader for each month he or she is in power.

This page provides a brief non-technical overview of the CoupCast methodology. For more extensive details, please visit our dataset page. Read More
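As a toy illustration of the leader-month framing described above (and nothing more; CoupCast's actual covariates and model are documented on its dataset page), the output can be thought of as a risk score attached to each leader-in-a-given-month observation. The data layout and placeholder scoring function below are purely hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class LeaderMonth:
    """One observation: a specific leader during a specific calendar month."""
    country: str
    leader: str
    month: date
    months_in_power: int

def coup_risk(obs: LeaderMonth) -> float:
    """Placeholder score in [0, 1]; a real model would use many covariates."""
    # Toy assumption only: estimated risk falls as tenure lengthens.
    return min(1.0, 0.10 / (1 + obs.months_in_power / 12))

# A new leader starts a fresh sequence of monthly estimates.
panel = [
    LeaderMonth("Exampleland", "Leader A", date(2022, 1, 1), 1),
    LeaderMonth("Exampleland", "Leader A", date(2022, 2, 1), 2),
    LeaderMonth("Exampleland", "Leader B", date(2022, 3, 1), 1),
]
for obs in panel:
    print(obs.leader, obs.month.isoformat(), round(coup_risk(obs), 3))
```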

#ic