- Superintelligence
- Posts
- Continental AI Strategy
Continental AI Strategy
Hey Friends, hope you had a fantastic weekend! In today's edition of Superintelligence AI, we dive into some exciting updates: the world's most powerful AI, a win for small models, chips designed for China, an Arabic-focused LLM, an AI model for Japanese beauty, and many more. Let's get started.
The AI World Today
xAI Claims World's Most Powerful AI
Apple Wins with Small Models
Nvidia's Blackwell Enters China
Meet Arabic-Focused Qalam LLM
LLM Trained for Japanese Beauty
AU Adopts Continental AI Strategy
Heads Up
Save the Date
AI Solutions
xAI's Memphis Supercluster Begins Training, Leads AI Race

Screenshot: Elon Musk/X
Elon Musk announced the activation of xAI's Memphis Supercluster, which began training at 4:20 am local time. This supercomputer, powered by 100,000 liquid-cooled NVIDIA H100 AI GPUs, is valued at up to $4 billion, making it the most powerful AI training cluster globally. The Memphis Supercluster operates on a single RDMA fabric, ensuring superior performance. Each NVIDIA H100 GPU costs between $30,000 and $40,000. Musk says the development positions xAI to deliver the most powerful AI by every metric by December this year. The supercomputer will train his next-gen Grok 3 AI chatbot. The project highlights collaboration between xAI, the X team, and NVIDIA, marking a significant milestone in AI technology advancement.
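The $4 billion valuation lines up with simple arithmetic on the per-GPU price range. A quick sketch, using only the figures from the story (networking, cooling, and facility costs are excluded):

```python
# Rough cost estimate for the Memphis Supercluster's GPUs alone.
# Figures from the article: 100,000 H100 GPUs at $30,000-$40,000 each.
num_gpus = 100_000
price_low, price_high = 30_000, 40_000

cost_low = num_gpus * price_low    # $3.0 billion
cost_high = num_gpus * price_high  # $4.0 billion

print(f"GPU cost estimate: ${cost_low / 1e9:.1f}B - ${cost_high / 1e9:.1f}B")
```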
Apple's DCLM Models Outperform Rivals in Benchmarks

Illustration: ChatGPT
Apple has unveiled a new family of DataComp Language Models (DCLM) on Hugging Face, featuring two open-source models with 7 billion and 1.4 billion parameters. The larger model surpasses Mistral-7B and nears the performance of leading models like Llama 3 and Gemma. These models were developed by a multidisciplinary team of researchers from Apple, the University of Washington, Tel Aviv University, and Toyota Research Institute. Apple focuses on small models to optimize performance and efficiency, demonstrating the importance of data curation. The 7B model, trained on 2.5 trillion tokens, achieves 63.7% 5-shot accuracy on MMLU. The 1.4B model, trained on 2.6 trillion tokens, scores 41.9% on the same test. Both models emphasize high-quality datasets and efficient training. The larger model is released under Apple's Sample Code License, while the smaller one is under Apache 2.0, indicating potential for commercial use and further development.
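The "5-shot" in those MMLU scores means five worked examples are prepended to each test question before the model answers. A minimal sketch of how such a prompt is assembled (the placeholder questions here are invented for illustration, not real MMLU items):

```python
# Build an n-shot prompt: n solved examples followed by the test question.
# The demo questions below are invented placeholders, not real MMLU items.
def build_few_shot_prompt(examples, question, n_shots=5):
    shots = examples[:n_shots]
    parts = [f"Q: {q}\nA: {a}" for q, a in shots]
    parts.append(f"Q: {question}\nA:")  # model completes after the final "A:"
    return "\n\n".join(parts)

demo = [(f"placeholder question {i}", f"answer {i}") for i in range(5)]
prompt = build_few_shot_prompt(demo, "test question")
print(prompt.count("Q:"))  # 6: five shots plus the test question
```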
Nvidia Designs Export-Compliant Chips for China

Illustration: ChatGPT
Nvidia is developing a version of its new flagship AI chips for the Chinese market that complies with current U.S. export controls, according to Reuters. The AI chip giant unveiled its "Blackwell" chip series in March, set for mass production later this year. The B200 chip within this series is 30 times faster than its predecessor at certain tasks, like chatbot responses. Nvidia will collaborate with Inspur, a major Chinese distributor, to launch the chip, tentatively named "B20." This move aims to preserve Nvidia's presence in the Chinese market despite U.S. export restrictions while fending off competition from Chinese firms like Huawei and Enflame. Nvidia and Inspur have not commented on the matter.
Qalam LLM Sets New Standard in Arabic Script Recognition

Screenshot: Qalam/Huggingface
Researchers from the University of British Columbia and Invertible AI have introduced Qalam, a groundbreaking multimodal large language model (LLM) for Arabic Optical Character Recognition (OCR) and Handwriting Recognition (HWR). Built on a SwinV2 encoder and RoBERTa decoder architecture, Qalam excels at handling the cursive and context-sensitive nature of Arabic script. It achieves a Word Error Rate (WER) of 0.80% for HWR and 1.18% for OCR, outperforming existing methods. Trained on over 4.5 million images from Arabic manuscripts and a synthetic dataset of 60,000 image-text pairs, Qalam is adept at processing high-resolution inputs and managing Arabic diacritics. This model significantly enhances the accuracy and efficiency of Arabic script recognition, opening new avenues in digital archiving, linguistic research, and educational applications.
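Word Error Rate, the metric cited for Qalam, is the word-level edit distance between the model's output and the reference text, divided by the number of reference words. A small self-contained sketch of the standard computation (not Qalam's own evaluation code):

```python
# Word Error Rate: (substitutions + insertions + deletions) / reference words,
# computed via word-level Levenshtein distance with dynamic programming.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[-1][-1] / len(ref)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the bat sat"))  # one substitution in three words
```

A WER of 0.80% thus means fewer than one word-level error per hundred reference words, which is why it is a strong result for cursive handwriting.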
Sakana AI's Evo-Ukiyoe Redefines Japanese Beauty Generation

Screenshot: Sakana AI/X
Sakana AI, in collaboration with Ritsumeikan University Art Research Center (ARC), is set to release Evo-Ukiyoe and Evo-Nishikie, AI models dedicated to preserving and promoting Japanese beauty. Evo-Ukiyoe generates images in the traditional Ukiyo-e style from prompts, while Evo-Nishikie colorizes illustrations in classical books. These models, trained on digital images of Ukiyo-e works from the ARC collection, aim to produce authentic Japanese art, countering the generic Japanese-style illustrations often seen in AI-generated images. This initiative enhances historical and cultural education, sparking global interest in Ukiyo-e and Japanese culture. The fully colorized classic book "Ehon Tamakatsura" showcases Evo-Nishikie's capabilities, highlighting the models' potential to revolutionize content creation and cultural preservation.
New AI Strategy to Transform Africa's Future

Illustration: ChatGPT
On July 20, the African Union (AU) Executive Council approved the Continental Artificial Intelligence (AI) Strategy and African Digital Compact to accelerate Africa's digital development. AU Commissioner Amani Abou-Zeid highlighted that these strategies will guide the use of digital technologies to address Africa's challenges, fast-track projects, and ensure ethical tech use. The initiatives aim to preserve African identity, languages, and cultures while creating a single digital market. They also emphasize the importance of regulatory frameworks to prevent technology abuse. This move is expected to enhance digital infrastructure, innovation, and development across the continent, benefiting economic growth and societal advancement.
Heads Up
AI Partnership: Google, Microsoft, Infosys, IIM B and Indian AI startups form new major coalition for Responsible AI in India
AI Robotics: Tesla to have humanoid robots for internal use next year
AI Funding: Cohere raises $500M to beat back generative AI rivals
AI Innovation: UAE-Based Mediatech company Majarra acquires Arabic AI startup leb
AI Impact: AI Water Quality monitors launched at UK swim spots
AI Position: Fashion magazine sparks backlash after introducing AI ‘Arab’ woman as editor
AI Use: 80% of Aussies use Galaxy AI monthly
AI India: India’s Beatoven.ai shows the world How AI music generation is done right
Save the Date
September 11-12: Riyadh Hosts Global AI Summit 2024
AI Solutions
DrugGPT: AI Tool To Address Medication Prescription

Illustration: ChatGPT
Oxford University has launched DrugGPT, an AI tool from Prof. David Clifton's AI for Healthcare Lab, to address medication prescription challenges. Designed to support clinicians and empower patients, DrugGPT acts as a digital safety net, offering instant second opinions based on patient conditions. It recommends medications, highlights potential adverse effects and interactions, and provides clear explanations using research and clinical guidelines. This innovation responds to a global issue, with over 237 million medication errors annually in England alone, as reported by the British Medical Journal. By enhancing prescription safety and patient compliance, DrugGPT aims to minimize these errors and improve health outcomes, significantly advancing healthcare technology.