Artificial Intelligence (AI) – Overview
Artificial Intelligence (AI) is a field of computer science that aims to develop machines capable of performing tasks that normally require human intelligence, such as problem-solving and decision-making.
Artificial Intelligence is a vast field, and the term often serves as an umbrella for several subfields, including Machine Learning (ML), Deep Learning (DL), and Artificial General Intelligence (AGI).
Is AI hype or reality?
Hype and realism are two sides of the same coin. While hype can bring excitement and anticipation, realism helps us stay grounded and focused on what’s truly important. In the case of AI, embracing both can lead to a balanced outlook that allows us to dream big while also staying grounded in reality.
AI is contributing significantly to many aspects of life, from self-driving cars to generative models like ChatGPT that create text, images, video, audio, and code, all of which are becoming more prevalent in everyday use as we strive to create intelligent machines that can function like humans.
We are fortunate to live in a world where AI applications have become an integral part of our daily lives. We have powerful web search engines like Google and Bing, recommender systems on platforms like YouTube, Amazon, and Netflix, speech assistants like Siri, Google Assistant, and Alexa, self-driving cars from companies like Tesla and Waymo, AI art tools such as Midjourney, OpenAI’s DALL-E, OpenAI’s Sora, and Adobe’s Firefly, and game-playing systems with superhuman capabilities in Chess and Go. In addition, AI has given us automatic language translators like Google Translate and Microsoft Translator, and it has even helped predict protein structures through AlphaFold 2.
Large companies have made significant contributions to AI algorithms that are being used in the healthcare sector. Examples include IBM Watson Oncology, Microsoft’s Hanover Project, Google DeepMind’s work with the UK National Health Service, and Elon Musk’s Neuralink brain chip.
There are thousands of successful AI applications used to solve industry-specific problems within Astronomy, Agriculture, Health, Medical Research, Drug discovery, and Games.
It’s truly inspiring to see how AI technology has advanced and brought so many incredible innovations into our daily lives.
Can machines think?
Alan Turing proposed the Turing test in his 1950 paper “Computing Machinery and Intelligence” as a way of addressing the question of machine intelligence. It is a method for determining whether a machine can exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
The field underwent a period of reduced interest in AI research, known as ‘AI winter’, followed by renewed interest decades later.
The enthusiasm and optimism surrounding AI have increased significantly since the early 1990s.
In 2012, there was a surge of interest in the sub-field of machine learning (ML) from both the research and corporate communities. This led to a substantial increase in funding and investment, resulting in the current state of AI as of 2023.
Professor Michael Wooldridge, who is the Director of Foundational AI Research and also a Turing AI World-Leading Research Fellow at The Alan Turing Institute, has expressed his belief that the original Turing test is now just a historical note.
It’s important to note that the Turing Test is not a definitive measure of intelligence or consciousness but rather a benchmark for a machine’s ability to simulate human-like responses.
The ongoing development and assessment of AI capabilities continue to evolve, and the criteria for “passing” the Turing Test may also change over time.
AI Boom
There has been a sudden surge in AI development and adoption over the last few years. This recent boom can be attributed to a confluence of factors that have collectively propelled the field forward.
Key Factors Contributing to the AI Boom are:
1. Advancements in Algorithms : Improved algorithms, especially in deep learning, have significantly increased AI’s capabilities.
2. Increase in Computational Power: Enhanced processing power allows for more complex models to be trained faster.
3. Availability of Big Data: The explosion of data generated by digital activities provides fuel for training AI models.
4. Investment and Research: There has been a substantial increase in funding and research in AI, leading to rapid advancements.
5. Generative AI: The rise of generative AI applications has captured public interest and demonstrated practical uses of AI.
6. Hardware Innovations: Developments in hardware, such as Graphics Processing Units (GPUs), have accelerated AI training and inference tasks.
Artificial Intelligence (AI) Subfields:
Artificial Intelligence (AI) encompasses a variety of subfields, each focusing on specific aspects of creating intelligent machines. Here’s a brief overview of some key subfields:
Machine Learning (ML)
ML is a subset of AI that enables machines to improve at tasks with experience. It involves algorithms that can learn from and make predictions or decisions based on data. ML is fundamental to AI because it provides the methods and principles for building systems that learn and adapt.
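The core idea of “improving at a task with experience” can be illustrated with a minimal sketch: fitting a line to data by gradient descent. The toy dataset and learning rate below are made up purely for illustration.

```python
# Minimal "learning from experience": fit y = w*x + b to toy data
# with gradient descent. Data and learning rate are illustrative.

data = [(1, 3), (2, 5), (3, 7), (4, 9)]  # follows y = 2x + 1

w, b = 0.0, 0.0
lr = 0.01  # learning rate

for _ in range(5000):
    # Gradients of mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # approaches w=2, b=1
```

Each pass over the data nudges the parameters toward values that reduce the prediction error; this learn-from-examples loop, scaled up enormously, is the essence of ML.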
Deep Learning (DL)
DL is a subfield of ML based on artificial neural networks (ANNs) with representation learning. It involves networks capable of learning unsupervised from data that is unstructured or unlabeled. DL has been instrumental in achieving state-of-the-art results in many AI tasks, particularly those involving large amounts of data, like image and speech recognition.
Deep Learning provides the foundation for many AI advancements, including generative AI models like LLMs (such as ChatGPT), which can generate human-like text based on context and input.
Deep learning architectures, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), excel in tasks like image recognition, natural language processing (NLP), and speech recognition.
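The basic computation these architectures share can be sketched as a forward pass through one hidden layer with a non-linear activation. All weights below are arbitrary values chosen for illustration, not from any trained model.

```python
# Forward pass through a tiny fully connected network:
# 2 inputs -> 2 hidden units (ReLU) -> 1 output.
# All weights are arbitrary values chosen for illustration.

def relu(v):
    return [max(0.0, x) for x in v]

def linear(inputs, weights, biases):
    # weights[i][j]: weight from input j to unit i
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

x = [1.0, -2.0]                                             # input features
h = relu(linear(x, [[0.5, -0.5], [1.0, 1.0]], [0.0, 0.5]))  # hidden layer
y = linear(h, [[2.0, -1.0]], [0.1])                         # output layer

print(h, y)  # h = [1.5, 0.0], y = [3.1]
```

Real networks stack many such layers with millions or billions of weights, but each layer is still a linear map followed by a non-linearity.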
Other Notable Subfields:
Natural Language Processing (NLP) : Focuses on the interaction between computers and humans through natural language.
Computer Vision: Deals with how computers can gain high-level understanding from digital images or videos.
Robotics: Combines AI with mechanical engineering to create intelligent machines that can perform tasks in the real world. Companies like Boston Dynamics are pioneers in this field.
Cognitive Computing: Aims to simulate human thought processes in a computerized model.
Affective Computing: Seeks to develop systems that can recognize, interpret, process, and simulate human affects (emotions).
These subfields are interconnected, often overlapping and contributing to one another’s advancements. As AI continues to evolve, new subfields may emerge, and existing ones may gain in prominence.
Large Language Models (LLMs)
LLMs are massive neural networks trained on vast amounts of text data. They learn to predict the next word in a sentence, which enables them to understand context and generate coherent text. GPT-3.5 (Generative Pre-trained Transformer 3.5) is an example of an LLM.
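LLMs operate at a vastly larger scale with neural networks, but the training objective of predicting the next word can be illustrated with a toy bigram counter over a made-up corpus:

```python
# Toy next-word predictor: count word bigrams in a tiny corpus and
# predict the most frequent follower. Real LLMs use neural networks
# trained on billions of documents; this only illustrates the objective.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . "
          "the cat ate the fish . "
          "the dog sat on the rug .").split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1  # count how often nxt follows prev

def predict_next(word):
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- most frequent word after "the"
print(predict_next("sat"))  # "on"
```

Where this model only looks one word back, an LLM conditions on thousands of preceding tokens, which is what lets it track context and produce coherent text.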
As of now, there are several Large Language Models (LLMs) available, each with its own unique capabilities and applications. Here’s a list of some notable LLMs:
– GPT (Generative Pre-trained Transformer): Developed by OpenAI, the GPT series includes models like GPT-3.5 and GPT-4, which are used in various applications including ChatGPT and Microsoft Copilot.
– PaLM (Pathways Language Model): Created by Google, PaLM is a large-scale language model; PaLM 2 reportedly has around 340 billion parameters.
– LLaMA (Large Language Model Meta AI): Developed by Meta, LLaMA is a family of open-source models designed for research purposes.
– Claude: Built by Anthropic, Claude models are designed to be steerable and less prone to producing harmful outputs.
– BLOOM: An open-source model created as a collaborative effort by multiple organizations.
– Cohere: A family of language models from the company of the same name, offering APIs for a range of use cases.
– Gemini: Another model from Google, which powers some queries on Bard.
– Stable Beluga & StableLM: Developed by Stability AI, these models are open-source and cater to a variety of tasks.
These models represent just a fraction of the LLMs currently available, as the field is rapidly evolving with new models being developed regularly. For the most comprehensive and up-to-date list, you can refer to dedicated directories and resources that track the development of LLMs.
LLMs can be fine-tuned to suit various sectors like Retail, Finance, Healthcare, and Entertainment. A real-world example: a hospital buys a pre-trained LLM from Google, Meta, or OpenAI, fine-tunes it with its own medical data, and uses it to improve diagnostic accuracy.
In the age of AI, we can simply use AI-powered products via chatbots like ChatGPT or Copilot that connect to various models, or companies can build products on their own data that connect to these models.
Prompt Engineering
Prompt Engineering is a crucial aspect of interacting with AI systems, particularly Large Language Models (LLMs). It involves the strategic crafting of prompts—questions or instructions—to guide AI models to produce specific and relevant outcomes. The importance of prompt engineering lies in its ability to bridge the gap between human intent and machine understanding. As AI becomes more integrated into our daily lives, the ability to communicate effectively with AI systems through well-designed prompts will be essential for tasks ranging from gathering information to creative problem-solving.
Here’s a brief overview of its significance:
Enhances Communication: Prompt engineering ensures that AI systems understand the nuance and intent behind user queries, leading to more accurate and helpful responses.
Facilitates Better Outcomes: By refining prompts, users can direct AI to generate content that is more aligned with their expectations, whether it’s text, code, or images.
Improves User Experience: A well-engineered prompt can make interactions with AI more natural and intuitive, improving the overall user experience.
Drives AI Advancements: As AI evolves, prompt engineering plays a pivotal role in training AI to interact organically with people, pushing the boundaries of what AI can achieve.
In the future, as AI systems become more sophisticated and widespread, the role of prompt engineering will likely grow in importance, making it an essential skill for anyone working with AI.
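In practice, prompt engineering often amounts to wrapping a bare question with a role, context, and output-format instructions. The template function below is a hypothetical illustration of that pattern, not a prescribed format from any vendor.

```python
# Hypothetical prompt template: wraps a bare question with a role,
# context, and output-format instructions -- common prompt-engineering
# ingredients for steering an LLM's response.

def build_prompt(question, role, context, output_format):
    return (f"You are {role}.\n"
            f"Context: {context}\n"
            f"Task: {question}\n"
            f"Respond as: {output_format}")

prompt = build_prompt(
    question="Explain what an LLM is.",
    role="a patient teacher for beginners",
    context="The reader has no machine learning background.",
    output_format="three short bullet points",
)
print(prompt)
```

Compared with sending only “Explain what an LLM is.”, the extra framing typically yields answers closer to the user’s intent in tone, depth, and shape.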
Generative AI
Generative AI is a transformative branch of artificial intelligence that focuses on creating new, original content. It leverages advanced machine learning techniques to produce outputs that can include text, images, video, music, and even stories.
Generative adversarial networks (GANs) and variational autoencoders (VAEs) are popular generative AI techniques.
Generative adversarial networks (GANs)
NVIDIA: Known for the GauGAN and StyleGAN series, NVIDIA provides powerful tools for image generation and modification.
OpenAI: Though primarily known for its work on language models, OpenAI has also explored GANs in various projects.
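The adversarial idea behind GANs can be sketched numerically in one dimension: a generator maps noise to a value, a discriminator scores real versus fake, and each takes a gradient step against the other. All numbers below (the “real” data near 4.0, learning rate, initial parameters) are made up for illustration, and the gradients are hand-derived for this tiny model.

```python
# Minimal 1-D GAN sketch: generator g(z) = w*z + b tries to produce
# values near the "real" data (around 4.0); discriminator
# d(x) = sigmoid(a*x + c) tries to score real high and fake low.
import math

def sigmoid(s):
    return 1 / (1 + math.exp(-s))

a, c = 0.1, 0.0     # discriminator parameters
w, b = 1.0, 0.0     # generator parameters
lr = 0.05
real, z = 4.0, 0.5  # one real sample, one noise sample

def d_loss():
    fake = w * z + b
    return (-math.log(sigmoid(a * real + c))
            - math.log(1 - sigmoid(a * fake + c)))

before = d_loss()
# Discriminator step: hand-derived gradients of the cross-entropy loss
fake = w * z + b
da = (sigmoid(a * real + c) - 1) * real + sigmoid(a * fake + c) * fake
dc = (sigmoid(a * real + c) - 1) + sigmoid(a * fake + c)
a -= lr * da
c -= lr * dc
after = d_loss()

# Generator step: move g(z) toward fooling the updated discriminator
p_fake = sigmoid(a * (w * z + b) + c)
dw = (p_fake - 1) * a * z
db = (p_fake - 1) * a
w -= lr * dw
b -= lr * db

print(before > after)  # True: discriminator improved on this batch
```

Repeating these alternating steps over many samples is, in miniature, the GAN training loop; real GANs use deep networks for both players rather than a line and a sigmoid.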
Variational autoencoders (VAEs)
DeepMind: As a leader in AI research, DeepMind explores various applications of VAEs in complex problem-solving.
TensorFlow and PyTorch: Not companies, but these open-source machine learning libraries offer extensive support for building VAEs, and many companies and researchers use them for their projects.
Transformer-based Models
Originally designed for natural language processing tasks, transformer-based models have shown remarkable versatility in generative tasks, including text, image, and music generation.
OpenAI: Creator of GPT (Generative Pre-trained Transformer) models, including the latest iterations, which have set new standards in generative text models.
Google: With models like BERT and T5, Google has heavily contributed to the development of transformer models, which also have generative applications.
Stability AI: Known for creating Stable Diffusion, a state-of-the-art text-to-image model.
Autoregressive Models
Autoregressive models generate sequences of data where the prediction of the next item is conditioned on the preceding items. They are used in text, speech, and music generation.
OpenAI: OpenAI’s GPT series are examples of autoregressive models focused on generating human-like text.
Google DeepMind: WaveNet, an autoregressive model for generating raw audio, demonstrates the versatility of autoregressive models beyond text.
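The “next item conditioned on the preceding items” loop can be sketched with a fixed table of rules and greedy decoding. The rule table below is made up for illustration; real autoregressive models learn these conditional predictions from data.

```python
# Autoregressive generation sketch: predict the next character
# conditioned on the previous two, append it, and repeat.
# The rule table is made up for illustration.

rules = {"he": "l", "el": "l", "ll": "o", "lo": "!"}

def generate(seed, steps):
    out = seed
    for _ in range(steps):
        context = out[-2:]     # condition on the preceding items
        if context not in rules:
            break              # no prediction for this context; stop
        out += rules[context]  # feed the prediction back in
    return out

print(generate("he", 3))  # -> "hello"
```

The defining feature is that each output symbol becomes part of the context for the next prediction, which is exactly how GPT-style models and WaveNet emit text and audio one step at a time.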
This overview highlights the dynamic and rapidly evolving landscape of generative models. The mentioned companies are at the forefront, pushing the boundaries of what’s possible in artificial intelligence.
Generative AI is used in a wide range of applications, such as drug discovery, chip design, material science, and creative arts. It is also used in enterprise settings for product development, enhancing customer experiences, and boosting employee productivity.
The recent rise of Generative AI can be traced to innovations like ChatGPT and DALL·E, which have captured public attention.
There are various Generative AI model types:
Text-to-text: takes natural language input and produces text output. For example – ChatGPT, Google Bard.
Text-to-image: trained on large sets of images, and generates new images from text. For example – DALL·E, Midjourney, Stable Diffusion.
Text-to-video: can generate and edit videos. For example – Meta AI’s Make-A-Video, Google’s Imagen Video, CogVideo.
Text-to-3D: can be used to produce game assets. For example – OpenAI’s Shap-E model.
Text-to-task: trained to perform a specific task or action based on text input. For example – Google Bard.
Applications like personalized recommendations, chatbots, and virtual assistants rely on generative models.
There are some challenges and ethical considerations around misuse, deepfakes, and bias in generative content, which emphasizes the importance of human oversight and validation in the deployment of Generative AI systems.
Generative AI is poised to become a general-purpose technology that increasingly integrates into daily work and life. Its evolution will continue to drive innovation across various sectors.
Artificial General Intelligence (AGI)
AGI, also known as strong AI or human-level AI, refers to a type of AI that can understand, learn, and apply knowledge in a way that is indistinguishable from human intelligence. AGI would be capable of performing any intellectual task that a human being can, with the ability to reason, solve problems, and make decisions across a wide range of domains.
The key differences between AGI and Generative AI are:
Scope: AGI aims for a broad, flexible intelligence like a human’s, while Generative AI is specialized in creating new content based on specific inputs.
Capabilities: AGI would possess general problem-solving skills and common sense, whereas Generative AI is designed to generate outputs that mimic the style or content of its training data.
Consciousness: AGI is often associated with the idea of machines having consciousness or self-awareness, a trait not attributed to Generative AI.
While AGI remains a theoretical and aspirational goal, Generative AI is already being used in various applications today, driving innovation in fields such as art, design, and communication.
ChatGPT
The ChatGPT chatbot, developed by OpenAI (backed by Microsoft) and based on Large Language Models (LLM), has taken the world by storm.
ChatGPT belongs to the generative pre-trained transformer (GPT) family of language models. It was trained using supervised learning and reinforcement learning from human feedback. Its capabilities are based on GPT-3.5 and GPT-4, and it can query the internet via the Bing search engine. The training data used for GPT-3.5 is roughly 500 billion words; a person reading 1,000 words per hour non-stop would need well over 50,000 years to get through it. GPT-3.5 has 175 billion parameters.
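The reading-time claim can be checked with quick arithmetic, assuming the roughly 500-billion-word figure and non-stop reading at 1,000 words per hour:

```python
# Rough check of the reading-time claim: how long would it take a
# person to read the training corpus at 1,000 words per hour,
# reading around the clock? (Assumes the ~500 billion word figure.)

words = 500_000_000_000
words_per_hour = 1_000
hours_per_year = 24 * 365

years = words / words_per_hour / hours_per_year
print(f"{years:,.0f} years")  # roughly 57,000 years
```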
This software application gained over 100 million weekly active users and over 180.5 million monthly users, making it the fastest-growing consumer app in history.
The release of OpenAI’s ChatGPT in 2022 led to the development of competing AI-assisted products such as Google Bard and Microsoft Copilot.
Google Bard combines two language models: Language Model for Dialogue Applications (LaMDA) and Pathways Language Model (PaLM).
Microsoft’s Copilot chatbot is powered by the same GPT LLMs as ChatGPT. It crawls the internet via the Bing search engine for information and has over 100 million daily active users.
Grok is an intriguing AI chatbot developed by Elon Musk’s company xAI.
Hardware requirements for AI Systems
There are several key components that are essential for the efficient functioning of AI applications. Here’s a breakdown of those components:
Processor (CPU)
Role: The CPU is the brain of the computer where most calculations take place. In the context of AI, it handles general computing tasks and pre-processing data for AI models.
Requirements: For AI tasks, high-performance CPUs with multiple cores are preferred. At least 4 cores per GPU accelerator are recommended, with more cores being beneficial for workloads with significant CPU compute components¹.
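One quick way to see what a machine offers is to query its core counts from Python’s standard library; note that `os.sched_getaffinity` is only available on Linux, so the sketch falls back to the logical count elsewhere.

```python
# Query the CPU resources available to this process using only the
# standard library -- a quick first check when sizing an AI workstation.
import os

logical_cores = os.cpu_count()  # logical cores visible to the OS
try:
    usable = len(os.sched_getaffinity(0))  # cores this process may use (Linux)
except AttributeError:
    usable = logical_cores                 # not available on macOS/Windows

print(f"logical cores: {logical_cores}, usable by this process: {usable}")
```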
Graphics Processing Unit (GPU)
Role: GPUs are critical for AI because they handle the parallel processing required for machine learning and deep learning tasks.
Requirements: Powerful GPUs with high memory bandwidth and computational capabilities are ideal. The specific GPU will depend on the complexity of the AI tasks being performed¹.
Tensor Processing Unit (TPU)
Role: TPUs are specialized hardware designed specifically for neural network machine learning. They are optimized for the operations that underpin neural network computations.
Requirements: TPUs are used for large-scale machine learning tasks and are particularly effective for both training and inference phases of deep learning models¹.
Memory (RAM)
Role: RAM is used for rapid data storage and access, which is crucial when dealing with large datasets in AI.
Requirements: The amount of RAM needed will depend on the size and complexity of the datasets being processed. More RAM allows for handling larger datasets and faster processing speeds.
Storage (Drives)
Role: Storage solutions like SSDs are used to store the vast amounts of data required for AI training and operations.
Requirements: Fast read/write speeds are crucial. Solid-state drives (SSDs) are preferred for their speed over traditional hard drives.
Power Supply Unit (PSU)
Role: The PSU powers all the components in the system.
Requirements: An efficient PSU (gold or platinum rated) that meets the combined thermal design power (TDP) of all components plus a margin is necessary to ensure stability and efficiency.
Cooling Systems
Role: Effective cooling is essential to maintain performance and prevent overheating, especially when running intensive AI tasks.
Requirements: High-quality cooling solutions, including fans and liquid cooling systems, are recommended to keep the hardware at optimal temperatures.
All of these components work together to support AI applications, from training machine learning models to processing complex algorithms in real time; balancing them is key to building a robust AI system.
AI Processing on Mobile Devices
A Neural Processing Unit (NPU) is a processor designed from the ground up to accelerate AI inference at low power. It is built to handle neural network layers composed of scalar, vector, and tensor math followed by a non-linear activation function.
NPUs are key to unlocking on-device generative AI, providing high performance at low power. They are crucial for applications that require real-time AI processing on mobile devices.
Google recently partnered with Samsung to bring its Gemini Nano and Gemini Pro models to the Galaxy S24 smartphone lineup, with on-device inference accelerated by the phone’s NPU.
AI: soon a trillion-dollar industry
The global AI market was valued at several hundred billion dollars, with projections suggesting it could reach over a trillion dollars in the coming years.
This rapid growth is fueled by advancements in technology, increasing adoption across various sectors, and significant investments in AI research and development.
Providing precise and current figures for AI investments and market sizes for each country is challenging due to the rapidly evolving nature of the industry and variances in available data.
However, here are some general insights and figures that were available up to early 2023 to give you an idea of the scale and commitment of these countries to AI development. Note that these figures can quickly become outdated, and it’s advisable to consult the latest reports for the most current data.
1. United States: The U.S. government and private sector investments in AI have been substantial, with billions of dollars annually going into research and development. For instance, U.S. private AI investment was reported to be over $40 billion in 2021.
The U.S. is often considered to be at the forefront of AI technology, with a strong startup ecosystem, leading AI companies (like Google, Microsoft, and IBM), and world-class universities.
2. China: China’s government planned to create a $150 billion AI industry by 2030. Exact annual figures are hard to pinpoint due to the broad and integrated approach to AI investment across various sectors, and major companies like Alibaba and Baidu pushing AI boundaries.
3. United Kingdom: The UK government announced a £1 billion AI sector deal in 2018, with hundreds of millions in funding coming from the private sector and government to boost the UK’s AI capabilities.
The UK has a robust AI strategy, leading AI startups, and strong academic contributions, especially in AI ethics and safety.
4. Germany: Germany announced an AI strategy in 2018, committing to spend €3 billion on AI research and development by 2025 to strengthen its position in AI.
Germany is a leader in Europe, focusing on AI in manufacturing and automotive industries, backed by government strategies to foster AI development and application.
5. Canada: Canada has been a pioneer in AI research, investing significantly through initiatives like the Pan-Canadian Artificial Intelligence Strategy, which initially received C$125 million.
Home to significant AI research hubs, particularly in deep learning, and a supportive government policy for AI development. Toronto, Montreal, and Edmonton are key centers.
6. France: France unveiled a plan in 2018 to invest €1.5 billion into AI research and development through 2022 to make France a leader in AI technology.
With a national AI strategy aimed at making France a leader in AI research, it hosts leading AI research institutes and has made substantial investments in AI development.
7. South Korea: South Korea announced plans to spend 1 trillion won (approximately $840 million) on AI by 2020 as part of a broader strategy to nurture the AI industry.
Investing heavily in AI, with a focus on manufacturing, robotics, and semiconductors, supported by ambitious government initiatives.
8. Japan: Japan’s government has been supporting AI through various initiatives, including a ¥10 billion (about $90 million) investment in AI research and development.
Concentrates on AI in robotics and has a strong technological base, with the government launching comprehensive AI strategies focusing on societal and economic benefits.
9. India: India has launched multiple initiatives to promote AI development, including the National Program on AI, with the government setting aside billions of rupees for digital infrastructure.
Rapidly emerging as a significant player in the AI field, with a focus on education, healthcare, and agriculture, backed by government initiatives and a vast pool of IT talent.
10. Israel: Israel is known for its high per capita rates of venture capital investment in AI, with the AI sector raising over $2 billion in 2020.
Known for its vibrant startup ecosystem and innovation in cybersecurity, medical technology, and autonomous vehicles, with significant investment in AI technologies.
These figures represent only a portion of each country’s total investment in AI, as private investments, venture capital, and specific sector allocations can significantly increase the overall expenditure. Moreover, the landscape of AI investment is dynamic, with new programs, funds, and strategies continually emerging. Recently, Saudi Arabia announced plans to put $40 billion into AI, which would make it one of the largest players in this hot market.
(OpenAI’s ChatGPT 3.5 and Microsoft’s Copilot were used to write the above article by giving various prompts. The data used by ChatGPT was roughly two years old, as of January 2022.)
కౌండిన్య – 21/03/2023
Turing test:
Source: Conversation with Bing, 16/03/2024
(1) Can AI Be More Human than Us? – GPT-4 Passes Turing Test. https://www.msn.com/en-gb/money/technology/can-ai-be-more-human-than-us-gpt-4-passes-turing-test/ar-BB1jtRQX.
(2) ChatGPT broke the Turing test — the race is on for new ways to assess AI. https://www.nature.com/articles/d41586-023-02361-7.
(3) Google’s AI passed the Turing test – The Washington Post. https://www.washingtonpost.com/technology/2022/06/17/google-ai-lamda-turing-test/.
(4) The Turing test: AI still hasn’t passed the “imitation game”. https://bigthink.com/the-future/turing-test-imitation-game/.
(5) AI is closer than ever to passing the Turing test for ‘intelligence …. https://theconversation.com/ai-is-closer-than-ever-to-passing-the-turing-test-for-intelligence-what-happens-when-it-does-214721.
(6) Getty Images. https://www.gettyimages.com/detail/illustration/robot-and-scientist-facing-turing-test-royalty-free-illustration/1207737571.
AI Boom:
Source: Conversation with Bing, 16/03/2024
(1) 2023: the year of the AI boom | The Week. https://theweek.com/tech/2023-ai-boom.
(2) AI boom – Wikipedia. https://en.wikipedia.org/wiki/AI_boom.
(3) What Really Caused the AI Boom – Medium. https://medium.com/fetch-ai/what-really-caused-the-ai-boom-fd6aa373b3f7.
(4) Why the AI revolution now? Because of 6 key factors.. https://becominghuman.ai/why-the-ai-revolution-now-because-of-6-key-factors-7ee92e482d2.
(5) “Ai Boom: The Explosive Rise of Artificial Intelligence and its Impact. https://langlabs.io/ai-boom-the-explosive-rise-of-artificial-intelligence-and-its-impact-on-our-lives/.
LLMs:
Source: Conversation with Bing, 16/03/2024
(1) Large language model – Wikipedia. https://en.wikipedia.org/wiki/Large_language_model.
(2) The best large language models (LLMs) in 2024 – Zapier. https://zapier.com/blog/best-llm/.
(3) Large Language Models: Complete Guide in 2024 – AIMultiple. https://research.aimultiple.com/large-language-models/.
(4) All Large Language Models Directory – All LLMs. https://llmmodels.org/.
(5) The Large Language Model (LLM) Index | Sapling. https://sapling.ai/llm/index.
Hardware for AI:
Source: Conversation with Bing, 16/03/2024
(1) Hardware Recommendations for Machine Learning / AI | Puget Systems. https://www.pugetsystems.com/solutions/scientific-computing-workstations/machine-learning-ai/hardware-recommendations/.
(2) Hardware Requirements for Artificial Intelligence – Medium. https://becominghuman.ai/hardware-requirements-for-artificial-intelligence-653335df899f.
(3) AI hardware – What they are and why they matter in 2023 [Updated]. https://roboticsbiz.com/ai-hardware-what-they-are-and-why-they-matter-in-2020/.
(4) Artificial Intelligence Hardware – What is Required to run AI?. https://redresscompliance.com/artificial-intelligence-hardware-what-is-required-to-run-ai/.
Generative AI:
Source: Conversation with Bing, 16/03/2024
(1) Generative AI: What Is It, Tools, Models, Applications and Use Cases. https://www.gartner.com/en/topics/generative-ai.
(2) Explained: Generative AI | MIT News | Massachusetts Institute of Technology. https://news.mit.edu/2023/explained-generative-ai-1109.
(3) Explained: Generative AI | MIT for a Better World. https://betterworld.mit.edu/explained-generative-ai/.
(4) What Is Generative AI? – IEEE Spectrum. https://spectrum.ieee.org/what-is-generative-ai.
(5) Generative AI Defined: How It Works, Benefits and Dangers – TechRepublic. https://www.techrepublic.com/article/what-is-generative-ai/.
(6) Generative AI in a nutshell by Henrik Kniberg https://youtu.be/2IK3DFHRFfw?si=DOhQ-qfGaNU7MOce
Artificial general intelligence:
Source: Conversation with Bing, 16/03/2024
(1) Artificial general intelligence – Wikipedia. https://en.wikipedia.org/wiki/Artificial_general_intelligence.
(2) What Is General Artificial Intelligence (AI)? Definition, Challenges …. https://www.spiceworks.com/tech/artificial-intelligence/articles/what-is-general-ai/.
(3) Generative artificial intelligence – Wikipedia. https://en.wikipedia.org/wiki/Generative_artificial_intelligence.
(4) Generative AI: What Is It, Tools, Models, Applications and Use Cases. https://www.gartner.com/en/topics/generative-ai.
(5) Generative artificial intelligence Definition & Meaning – Merriam-Webster. https://www.merriam-webster.com/dictionary/generative%20artificial%20intelligence.
(6) What is generative AI? | IBM Research Blog. https://research.ibm.com/blog/what-is-generative-AI.
(7) What Is Artificial General Intelligence? Definition and Examples. https://www.coursera.org/articles/what-is-artificial-general-intelligence.
(8) Artificial intelligence – Wikipedia. https://en.wikipedia.org/wiki/Artificial_intelligence.
Prompt Engineering:
Source: Conversation with Bing, 16/03/2024
(1) What is Prompt Engineering? A Detailed Guide For 2024. https://www.datacamp.com/blog/what-is-prompt-engineering-the-future-of-ai-communication.
(2) Prompt Design and Engineering: Introduction and Advanced Methods. https://arxiv.org/html/2401.14423v3.
(3) What is Prompt Engineering?. https://promptengineering.org/what-is-prompt-engineering/.
(4) Prompt Engineering: Enhancing AI-Human Communication. https://www.ediiie.com/blog/prompt-engineering-enhancing-ai-human-communication/.
(5) What Is Prompt Engineering? | IBM. https://www.ibm.com/topics/prompt-engineering.
TPU, GPU, and NPUs:
Source: Conversation with Bing, 16/03/2024
(1) TPU vs GPU in AI: A Comprehensive Guide to Their Roles and Impact on …. https://www.wevolver.com/article/tpu-vs-gpu-in-ai-a-comprehensive-guide-to-their-roles-and-impact-on-artificial-intelligence.
(2) What is an NPU? And why is it key to unlocking on-device generative AI …. https://www.qualcomm.com/news/onq/2024/02/what-is-an-npu-and-why-is-it-key-to-unlocking-on-device-generative-ai.
(3) What is an NPU and how does it help with AI? – chillblast.com. https://www.chillblast.com/blog/what-is-an-npu-and-how-does-it-help-with-ai.
(4) AI Chips: NPU vs. TPU – Bizety: Research & Consulting. https://www.bizety.com/2023/01/03/ai-chips-npu-vs-tpu/.
(5) How NPUs are reshaping smartphone AI | TechGig. https://content.techgig.com/technology/how-npus-are-reshaping-smartphone-ai/articleshow/105518652.cms.