AutoGPT — LangChain — Deep Lake — MetaGPT: Building the Ultimate LLM App
Table of contents
- About ProjectPro
- Why LLMs?
- What are Large Language Models (LLMs)?
- Some Key Properties of LLMs:
- Modern LLMs as of August 2023
- Introducing LangChain
- Key features of LangChain:
- Applications of LangChain
- 1. Conversational AI and Chatbots
- 2. Intelligent Personal Assistants
- 3. Automated Content Creation
- 4. Data Analysis and Reporting
- 5. Question Answering and Information Retrieval
- 6. Semantic Search and Recommendation Engines
- 7. Automated Customer Support
- 8. Automated Game Design
- Why LangChain is a Watershed Moment for LLMs
- The Power of LangChain + AutoGPT
- Enabling Multimodal Abilities
- Vector Databases
- Deep Lake
- The Need for Specialized Storage
- Serverless Architecture for Cloud Scale
- Cloud-Native Design
- Applications of LangChain, AutoGPT, and Deep Lake for Corporates on the Cloud
- 1. Global Communication:
- 2. Automated Content Generation:
- 3. Customer Support:
- 4. Data Analysis and Insights:
- 5. Automated Workflow:
- 6. Training and Development:
- 7. Legal and Compliance:
- 8. Supply Chain Management:
- 9. Cybersecurity:
- MetaGPT
- Potential Applications[1]
- Multimodal Applications
- LLMs of the Future[1]
- References
All images, unless stated otherwise, are generated by Bing Image Creator from this link: https://www.bing.com/create
About ProjectPro
ProjectPro is a pioneer in creating Big Data / Machine Learning / Artificial Intelligence project templates deployed to the cloud (Google, Amazon, Microsoft). Our mission is to create project templates that reduce the cost of doing a Big Data project from 500,000 USD to less than 1,000 USD. Whatever you need, whether Sentiment Analysis, Churn Prediction, Time Series Data Processing, or Regression and Classification projects, we have it all, fully built and implemented on cloud computing systems. Fill in your user credentials, filenames, paths, and access/authentication tokens, and you have a fully functional Big Data AI ML cloud project.
Why LLMs?
Transformer-based neural networks are the future of Artificial Intelligence. As one online reference puts it, neural networks are universal function approximators, but Transformers are universal system approximators. This makes them viable for every task that can be computerized today. Whether you are an executive or a development team professional, learning as much as possible about Transformers and Large Language Models is not just a part-time goal but a full-time, engaging project. Why? Because they are going to change the entire AI landscape. There are even rumors that GPT-5 will be AGI-ready, that is, as intelligent as or even more intelligent than human beings.
Complete Project Solutions
LLMs are changing the way we deal with data. What was once the domain of PyTorch and TensorFlow experts is now open to anybody thanks to the simplicity of the natural language interface. Technology is more democratized than ever. However, end-to-end projects still cost 500,000 USD, which is where ProjectPro is the best possible choice: end-to-end projects are available, with training videos and tutorials in case your MLOps team needs to get up to speed on the latest technology. Pretty cool, right? And saving 499,000 USD is no joke! Don't believe us? Head to ProjectPro.io to know more!
What are Large Language Models (LLMs)?
LLMs are a class of artificial intelligence systems that are trained on massive text datasets to generate human-like text. They can understand and generate natural language with high coherence[1].
Some Key Properties of LLMs:
▪ The adoption of a transformer-based neural network architecture enables these models to effectively capture and represent long-range dependencies within textual data, facilitating improved understanding and analysis.
▪ By employing the technique of self-supervised learning using unlabeled textual data, it becomes possible for these models to continually enhance their performance and capabilities as they have access to a larger corpus of text for learning.
▪ Furthermore, these models demonstrate remarkable few-shot learning capabilities, showcasing their ability to achieve impressive performance even with limited examples when confronted with new tasks or problems.
▪ Moreover, these models exhibit the remarkable quality of in-context learning, allowing them to adapt and adjust their capabilities based on the specific context or prompt they are provided with. This adaptability enhances their overall effectiveness and makes them highly versatile.
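To make the last two properties concrete, here is a minimal sketch of few-shot, in-context learning: the prompt itself carries a handful of labeled examples, and the model simply continues the pattern, with no retraining involved. It assumes the OpenAI Python client as it existed in mid-2023; the model choice and the example reviews are purely illustrative.

```python
import openai  # assumes `pip install openai` and an OPENAI_API_KEY in the environment

# A few labeled examples embedded directly in the prompt -- no fine-tuning involved.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The battery lasts all day and the screen is gorgeous."
Sentiment: Positive

Review: "It stopped working after a week and support never replied."
Sentiment: Negative

Review: "Setup took five minutes and it just works."
Sentiment:"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{"role": "user", "content": few_shot_prompt}],
    max_tokens=5,
    temperature=0,
)
print(response.choices[0].message["content"])  # expected to continue the pattern: "Positive"
```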
Modern LLMs as of August 2023
Here is a list of some of the most popular and powerful large language models (LLMs) that exist today:
[1] https://www.analyticsvidhya.com/blog/2023/03/an-introduction-to-large-language-models-llms/
This table covers some of the largest and most capable LLMs as of mid-2023. Key highlights:
▪ Parameters ranging from 4B to over 700B, with most models over 100B parameters
▪ Training data in the hundreds of GBs to 1TB+ scraped from the internet
▪ Leading capabilities in conversational AI, text generation, summarization, and general NLP
▪ Dominated by big tech companies like Anthropic, Google, and NVIDIA/Microsoft
▪ Rapid innovation continuing, with LLMs likely to hit 1T+ parameters soon.
This race is not going to stop any time soon. We all know that we may be heralding the destruction of humanity, as AI-supremo Geoffrey Hinton foresees, but the best AI brings the best profit, and hence no company will move away from rapid AI and ML innovation. Sad, but true.
So how do we work with LLMs? The answer is simple — LangChain!
Introducing LangChain
LangChain is an open-source framework, created by Harrison Chase, that unlocks the full potential of LLMs.
Key features of LangChain:
▪ Modular and extensible architecture: Multiple LLMs and other AI systems can be chained together.
▪ Integration with knowledge bases: LLMs can now utilize facts, relationships, and rules.
▪ Interaction with real-time data sources: Enables dynamic generation powered by latest information.
▪ Stateful processing: Context, prompts, and results are tracked to enable continuity and memory.
▪ Orchestration of model inference: Optimizes running of complex model pipelines.
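To make this modular, chain-based design concrete, here is a minimal sketch using the LangChain Python API as it stood in mid-2023. The prompt, temperature, and model choice are illustrative, and an OpenAI API key is assumed to be configured in the environment.

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# A reusable prompt template -- one modular "link" in the chain.
prompt = PromptTemplate(
    input_variables=["product"],
    template="Suggest a catchy name for a company that makes {product}.",
)

llm = OpenAI(temperature=0.7)          # any supported LLM can be swapped in here
chain = LLMChain(llm=llm, prompt=prompt)

print(chain.run(product="eco-friendly water bottles"))
```

Any supported LLM, prompt template, or downstream chain can be swapped into this pipeline, which is exactly the modularity described in the list above.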
LangChain represents a major breakthrough in leveraging LLMs to create AI systems that can interact with the real world. Here are eight compelling use cases that demonstrate its vast potential:
Applications of LangChain
1. Conversational AI and Chatbots
LangChain’s conversational abilities make it ideal for building chatbots that can engage in natural, multi-turn conversations with users.
For example — power a customer support chatbot that understands context, asks clarifying questions, looks up user account details, and provides solutions tailored to the user’s specific issue.
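A hedged sketch of such a multi-turn, stateful conversation using LangChain's conversation memory (mid-2023 API; the model and the support scenario are illustrative):

```python
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

# ConversationBufferMemory keeps the running transcript so later turns have context.
conversation = ConversationChain(
    llm=OpenAI(temperature=0),
    memory=ConversationBufferMemory(),
)

print(conversation.predict(input="Hi, my order #1234 arrived damaged."))
print(conversation.predict(input="What information do you need from me to replace it?"))
# The second reply can refer back to the damaged order because the memory retains turn one.
```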
2. Intelligent Personal Assistants
LangChain can be used to create smart assistants like Siri or Alexa. Assistants built with LangChain can understand commands, remember user preferences, integrate with external APIs/data sources, and complete complex tasks.
For example — booking a doctor’s appointment based on the user’s availability, insurance, and medical history.
3. Automated Content Creation
LangChain’s generative capabilities make it well-suited for automated content creation. It can generate high-quality content tailored to specific topics, formats, and styles.
For instance, writing unique blog posts from a few keywords, or integrating current data points into a financial report template.
4. Data Analysis and Reporting
LangChain can analyze large datasets and generate data-driven narratives and visualizations. This makes it useful for business intelligence and data analytics applications.
For example — generating a quarterly sales report by pulling data from a company’s CRM, analyzing trends, and creating charts/summaries.
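As a rough sketch of this pattern, the number crunching can be done with pandas while the narrative is left to the LLM; the sales figures and the prompt below are purely illustrative:

```python
import pandas as pd
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Toy quarterly sales data standing in for a CRM export.
sales = pd.DataFrame({
    "quarter": ["Q1", "Q2", "Q3", "Q4"],
    "revenue": [120_000, 150_000, 135_000, 180_000],
})
summary_table = sales.to_string(index=False)

prompt = PromptTemplate(
    input_variables=["table"],
    template="Write a short executive summary of these quarterly sales figures:\n{table}",
)
report = LLMChain(llm=OpenAI(temperature=0), prompt=prompt).run(table=summary_table)
print(report)
```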
5. Question Answering and Information Retrieval
LangChain excels at understanding natural language queries and retrieving accurate answers from large knowledge bases.
It could power a legal research tool that interprets questions posed in plain English, searches legal databases, and returns the most relevant legislation, case law, and articles.
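A minimal sketch of such retrieval-augmented question answering with LangChain (mid-2023 API; the two "legal" snippets are invented stand-ins, and a FAISS index plus an OpenAI API key are assumed to be available):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# Index a handful of documents (a real system would load statutes, case law, articles, etc.).
docs = [
    "Section 12 requires written notice within 30 days of termination.",
    "Case Smith v. Jones (2019) held that email counts as written notice.",
]
index = FAISS.from_texts(docs, OpenAIEmbeddings())  # requires the faiss-cpu package

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    retriever=index.as_retriever(),
)
print(qa.run("Does an email satisfy the written-notice requirement?"))
```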
6. Semantic Search and Recommendation Engines
LangChain can understand the underlying meaning and context of searches to provide relevant recommendations and results.
For instance, building user profiles based on browsing history and suggesting products that match their preferences and constraints.
7. Automated Customer Support
LangChain is well-suited for automated customer service applications. It can understand support tickets, diagnose issues, suggest solutions, escalate complex tickets, and respond to customers.
For example, populating a knowledge base with FAQs and troubleshooting guides to resolve common customer queries.
8. Automated Game Design
LangChain’s creativity makes it well-suited for designing game elements like characters, plots, puzzles, and dialogues.
For example, it could generate unique game levels each time by analyzing player behavior and crafting new challenges.
Why LangChain is a Watershed Moment for LLMs
LangChain represents a major evolution in LLMs for several reasons:
▪ It allows LLMs to interact with external data sources, overcoming the limitation of models like GPT-3 that can only generate text based on the prompt.
▪ Its modular architecture makes it easy to chain together different models and data sources, unlocking new capabilities.
▪ LangChain offers a full-stack solution for developing LLM-powered applications with features like prompt management and versioning.
▪ It enables two-way communication between the LLM and external systems, allowing real-time integration.
▪ LangChain simplifies access and scales model inference, allowing developers to focus on building applications instead of infrastructure.
Overall, LangChain moves LLMs from passive text generators to active participants that can converse, reason, and interface with the real world. This paves the way for the next generation of utility AI applications. This, my dear executive/engineer, is the road to the AI singularity!
The Power of LangChain + AutoGPT
AutoGPT is an open-source autonomous agent framework from Significant Gravitas that chains GPT-4 calls together to plan and carry out multi-step tasks with minimal human input.
Combining AutoGPT with LangChain unlocks new possibilities:
▪ AutoGPT generates remarkably human-like text, which makes applications built with it more natural and intuitive.
▪ LangChain allows AutoGPT to dynamically interact with external data to improve its responses.
▪ Together, they can handle complex, multi-step conversations and tasks.
▪ The LLM underlying AutoGPT can be continuously updated with new data through self-supervised learning to expand its knowledge.
▪ LangChain enables seamless integration between AutoGPT and real-world systems.
Using LangChain to combine AutoGPT with datasets, APIs, rules, and workflows results in an AI assistant that is incredibly capable, responsive, and lifelike.
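As a hedged sketch of this kind of tool-using, AutoGPT-style loop, the example below uses LangChain's generic agent API rather than any AutoGPT-specific integration; the OrderLookup tool is a hypothetical stand-in for a real CRM or database call.

```python
from langchain.agents import initialize_agent, AgentType, Tool
from langchain.llms import OpenAI

def lookup_order(order_id: str) -> str:
    """Hypothetical stand-in for a real CRM or database call."""
    return f"Order {order_id}: shipped on 2023-08-01, arriving in 2 days."

tools = [
    Tool(
        name="OrderLookup",
        func=lookup_order,
        description="Look up the shipping status of an order by its ID.",
    )
]

# The agent decides when to call the tool, reads the result, and acts on it.
agent = initialize_agent(
    tools,
    OpenAI(temperature=0),
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
agent.run("Where is order 1234 right now?")
```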
Enabling Multimodal Abilities
LangChain provides the foundations for multimodal AI systems that can process and generate data beyond just text.
Key features like modular architecture, external data integration, and exchange of structured data enable LangChain models to handle images, audio, video, and more.
For instance, a LangChain-based assistant could have a vision module to analyze images and an audio module to interpret speech and generate verbal responses.
Multimodal capabilities significantly expand the scope of what AI assistants can perceive and express to become more well-rounded and useful companions.
So now you know why LLMs, LangChain, and AutoGPT mark a new era in AI application building, and why companies must rush to find experts who can work with them proficiently and professionally. Rest assured that ProjectPro will have several LangChain application projects in the near future. But have you ever wondered what the background technology behind these wonderful innovations is?
Vector Databases
Vector databases, also known as vector search engines or similarity search engines, are a type of database designed to handle high-dimensional vector data. They are a crucial component in many machine learning and artificial intelligence applications, as they allow for efficient storage, retrieval, and similarity search of vector data.
Vectors are mathematical objects that can represent a wide range of data types, including text, images, audio, and video. In machine learning, data is often transformed into high-dimensional vectors using techniques like embedding or encoding. For example, a piece of text can be transformed into a vector using word embedding techniques like Word2Vec or BERT, and an image can be transformed into a vector using convolutional neural networks.
Once data is transformed into vectors, it can be stored in a vector database. The key feature of a vector database is its ability to perform similarity search: given a query vector, the database can efficiently find the most similar vectors it holds, using distance metrics like Euclidean distance or cosine similarity. This works for any type of data, believe it or not, and it makes feeding that data to LLMs a breeze.
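Here is a minimal sketch of what that similarity search looks like under the hood, using plain NumPy and cosine similarity; the toy four-dimensional embeddings stand in for the hundreds of dimensions that real embedding models produce.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 means identical direction, 0.0 means orthogonal."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional embeddings; real text embeddings typically have hundreds of dimensions.
database = {
    "doc_about_cats": np.array([0.9, 0.1, 0.0, 0.2]),
    "doc_about_dogs": np.array([0.8, 0.2, 0.1, 0.1]),
    "doc_about_taxes": np.array([0.0, 0.1, 0.9, 0.7]),
}
query = np.array([0.85, 0.15, 0.05, 0.15])  # e.g. the embedding of a query about pets

ranked = sorted(database.items(), key=lambda kv: cosine_similarity(query, kv[1]), reverse=True)
for name, _ in ranked:
    print(name)  # most similar documents first
```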
Vector databases are crucial for many machine learning and AI applications. For example, in a recommendation system, a user’s behavior can be represented as a vector, and the system can recommend items that are similar to the user’s behavior by performing a similarity search in the item vector database. In an image search engine, an image can be represented as a vector, and the system can find similar images by performing a similarity search in the image vector database.
There are several challenges in building and using vector databases. One challenge is the curse of dimensionality, which refers to the fact that as the dimensionality of the vectors increases, the time and space required to store and search the vectors also increases. To overcome this challenge, vector databases often use techniques like dimensionality reduction and approximate nearest neighbor search.
Another challenge is ensuring that the vectors accurately represent the original data. This requires careful selection and tuning of the embedding or encoding techniques used to transform the data into vectors.
Despite these challenges, vector databases are a powerful tool for handling high-dimensional vector data. They enable efficient storage, retrieval, and similarity search of vector data, making them a crucial component in many machine learning and AI applications. As the amount of data continues to grow and the complexity of machine learning models continues to increase, the importance of vector databases is likely to grow as well.
The development of vector databases was a critical step in managing data for LLMs. With legacy technology alone, we could not process data for our LLMs as easily as we do now. Vector database technology now sits behind almost everything that uses an LLM today, from GPT-2, GPT-3, ChatGPT, GPT-4, Claude 2, and AutoGPT to locally run open-source models like Falcon.
Deep Lake
Deep Lake is a vector database that supercharges LLM applications through its speed, scalability, and vector search functionality. It serves as a critical backend for scaling up transformers and LLMs for real-world impact.
The Need for Specialized Storage
Transformers like GPT-3.5 and GPT-4 are taking natural language processing to new heights. Unlike previous ML models, transformers process text as sequences of embedded vectors rather than matrices. This allows them to model context and long-range dependencies in language. Early results are promising — transformers can generate coherent text, translate between languages, and even write code based on examples.
However, transformers have immense storage and computational requirements. A single transformer can have billions of parameters, with each parameter living inside dense vector embeddings. Just storing these vectors demands huge amounts of memory and disk space: GPT-3, for example, has 175 billion parameters, and its weights alone add up to hundreds of gigabytes.
Processing and searching through such vast vector spaces poses additional challenges. Traditional databases like MySQL or MongoDB are designed for structured tabular data, not the high-dimensional vectors used in transformers. They lack native support for vector indexing, similarity search and other capabilities needed for transformer workloads.
This is where Deep Lake comes in. It provides a serverless cloud infrastructure optimized specifically for managing and querying large vector datasets. By building Deep Lake into applications like LangChain, transformers can be deployed at scale to solve real-world problems.
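As a hedged sketch of that integration, LangChain exposed Deep Lake as a vector store in mid-2023; the documents, dataset path, and embedding model below are illustrative.

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import DeepLake

texts = [
    "Transformers process text as sequences of embedded vectors.",
    "Deep Lake stores embeddings in object storage and searches them on demand.",
]

# The dataset path could point to local disk, S3, or Activeloop's managed hub.
db = DeepLake.from_texts(
    texts,
    OpenAIEmbeddings(),
    dataset_path="./my_deeplake_dataset",  # illustrative path
)

results = db.similarity_search("How does Deep Lake store vectors?", k=1)
print(results[0].page_content)
```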
Serverless Architecture for Cloud Scale
A key innovation of Deep Lake is its serverless architecture. Deep Lake does away with always-on servers and databases. Instead, it spins up on-demand compute resources to handle vector workloads as needed.
This auto-scaling serverless approach brings several advantages:
▪ Cost savings — Pay only for the compute used instead of overprovisioning fixed servers.
▪ Flexibility — Scale up and down instantly based on real-time demands. No capacity planning needed.
▪ High availability — Serverless systems have built-in redundancy and failover. No single point of failure.
▪ Operational simplicity — Serverless abstracts away infrastructure management. No DevOps overhead.
Deep Lake runs seamlessly on serverless platforms like AWS Lambda, Azure Functions and Google Cloud Run. You can deploy Deep Lake clusters in minutes without managing any servers yourself. It will then auto-scale to handle terabytes of vector data while optimizing cost, performance and reliability.
This hands-off serverless experience is perfect for teams wanting to operationalize transformers fast without infrastructure headaches. The same Deep Lake deployment can support unpredictable workloads from prototypes to full-blown production applications.
Cloud-Native Design
In addition to being serverless, Deep Lake utilizes other cloud-native technologies for resilience and scale.
Deep Lake is natively integrated with object storage services like Amazon S3, Azure Blob and Google Cloud Storage. This allows it to separate storage and compute for better resource utilization. Vectors are stored durably and cheaply in cloud storage while compute spins up on-demand for processing.
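A rough sketch of that storage/compute split using the open-source deeplake Python package (3.x API); the bucket name is illustrative and cloud credentials are assumed to be configured in the environment.

```python
import deeplake
import numpy as np

# Create a dataset backed by S3 (a local path like "./vectors" works the same way).
ds = deeplake.empty("s3://my-bucket/llm-vectors")  # illustrative bucket; AWS creds assumed set up

ds.create_tensor("embedding", htype="embedding")
ds.create_tensor("text", htype="text")

with ds:  # batches writes before flushing them to object storage
    ds.append({
        "embedding": np.random.rand(1536).astype(np.float32),  # e.g. an OpenAI embedding
        "text": "Vectors live durably in cloud storage; compute spins up only for queries.",
    })
```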
Deep Lake also supports cloud-native orchestration using Kubernetes. The Deep Lake controller and workers can be deployed as microservices in Kubernetes clusters. This makes management easier through features like auto-scaling, rolling updates, service discovery and config management.
Finally, Deep Lake leverages cloud managed services like AWS DynamoDB and CloudWatch for coordination, locking, metrics and monitoring. This simplifies building reliability into Deep Lake without running your own coordination software.
Deep Lake is a remarkable answer to the problem of storing gigabytes of data for LLMs efficiently, easily, and practically, and its design makes optimal use of budgets. OpenAI reportedly spends around 700,000 USD a day to operate its LLMs, and some are even predicting bankruptcy for the company. Personally, though, I am sure that the minds that built GPT-4 are more than capable of monetizing these world-transforming technologies, even if they are not monetizing them enough now.
Applications of LangChain, AutoGPT, and Deep Lake for Corporates on the Cloud
LangChain and AutoGPT are advanced AI technologies that can be leveraged for various corporate applications, especially when combined with cloud computing. Examples include:
1. Global Communication:
LangChain can be used to facilitate real-time translation in multinational corporations, breaking down language barriers and improving communication efficiency. This can be particularly useful in video conferencing and collaborative platforms hosted on the cloud.
2. Automated Content Generation:
AutoGPT can be used to generate a variety of content, from marketing materials and product descriptions to technical manuals and reports. This can significantly reduce the time and resources required for content creation.
3. Customer Support:
These AI models can be used to power cloud-based customer support systems, providing 24/7 assistance to customers around the world. They can handle a large volume of queries, reducing the need for human agents and improving response times.
4. Data Analysis and Insights:
Deep Lake, with its advanced data processing capabilities, can be used to analyze large volumes of corporate data stored in the cloud. It can provide insights into customer behavior, market trends, and operational efficiency, helping businesses make data-driven decisions.
5. Automated Workflow:
These AI technologies can be used to automate various business processes, from email responses and scheduling to data entry and document management. This can improve productivity and reduce the risk of human error.
6. Training and Development:
They can be used to create personalized training materials and conduct assessments, helping businesses improve their employee training programs.
7. Legal and Compliance:
These AI models can be used to analyze legal documents, ensure compliance with regulations, and even predict the outcome of legal cases. This can be particularly useful for businesses in highly regulated industries.
8. Supply Chain Management:
Deep Lake can be used to analyze supply chain data, predict demand, and optimize logistics, helping businesses improve their supply chain efficiency.
9. Cybersecurity:
These AI technologies can be used to detect and respond to cyber threats, protecting corporate data and systems. Hackers, crackers, cyber-criminals, ransomware — all these are terms we hear everywhere these days. If you’re not using AI for cybersecurity, you’re falling behind.
MetaGPT
MetaGPT, or multimodal Generative Pretrained Transformers, represents a significant leap in the evolution of artificial intelligence. This new generation of AI models is capable of understanding and generating information across multiple modes or types of data, including text, images, audio, and video. The potential power of these multimodal Generative AIs is vast, and their applications are only beginning to be explored.
This brings us as close to AGI as we have ever come. Rest assured, our engineers and our authors are working full time to bring you multi-modal LLMs and Agents as soon as possible. But why do we call it AGI? To fully appreciate the potential of MetaGPT, it's important to understand the concept of multimodality. In the context of AI, multimodality refers to the ability of a model to process and understand different types of data simultaneously. Traditional AI models are unimodal, meaning they are designed to process a single type of data. For example, a text-based model like GPT-3 can understand and generate text, but it cannot process images or audio. In contrast, a multimodal model like MetaGPT can process text, images, audio, and video, all within a single model.
The ability to process multiple types of data simultaneously gives MetaGPT a much richer understanding of the world. For example, consider a scene from a movie. A unimodal text-based model could process the subtitles and understand the dialogue, but it would miss out on all the visual and auditory information. A unimodal image-based model could process the video and understand the visual information, but it would miss out on the dialogue and any non-visual cues in the audio. In contrast, MetaGPT could process the video, audio, and subtitles simultaneously, giving it a much richer and more complete understanding of the scene.
This improved understanding leads to more accurate and context-rich generation of content. For example, MetaGPT could generate a textual description of a movie scene that includes not only the dialogue but also descriptions of the visual and auditory information. This could be useful in a variety of applications, such as generating detailed descriptions for visually impaired individuals or creating rich, context-aware summaries of video content.
Of course, MetaGPT is not yet fully ready, but Google has made advances that seem to bring this kind of multimodal, MetaGPT-style intelligence to robots. More on that in a later article.
Potential Applications[1]
Beyond improved understanding and generation, MetaGPT opens up a wide range of cross-modal applications. These are applications that require understanding and generation across different types of data. For example, MetaGPT could be used to generate a textual description of an image, convert text to speech, or even generate a video from a textual description. These applications could be useful in a variety of fields, from accessibility and content creation to education and entertainment.
Another potential application of MetaGPT is in improving interaction between humans and AI. Traditional AI models are limited in the types of input they can understand and the types of output they can generate. For example, a text-based model can understand text input and generate text output, but it cannot understand voice commands or generate visual output. In contrast, MetaGPT can understand input in a variety of formats, including text, voice, and images, and it can generate output in the most appropriate format. This could lead to more natural and intuitive interactions between humans and AI.
MetaGPT also has potential applications in creative fields. For example, it could be used to generate music from text, create art from descriptions, or generate stories from images. These applications could revolutionize the way we create and consume art, opening up new possibilities for creativity and expression.
Finally, MetaGPT could be used to analyze complex datasets that include different types of data. This could be particularly useful in fields like healthcare, where patient data can include text (medical records), images (scans), and numerical data (vital signs). By analyzing all of this data simultaneously, MetaGPT could provide a more complete and accurate understanding of a patient’s health.
In conclusion, the potential of multimodal Generative AIs like MetaGPT is vast. By understanding and generating information across multiple types of data, these models offer a richer and more complete understanding of the world. This opens up a wide range of applications, from improved content generation and cross-modal applications to improved interaction and creative applications. As these models continue to improve, they will likely become an integral part of many different fields and industries. The potential applications include:
[1] https://www.forbes.com/sites/forbestechcouncil/2023/07/20/the-power-of-domain-specific-llms-in-generative-ai-for-enterprises/?sh=7d3d72d01e50
Multimodal Applications
Improved Understanding and Generation: Multimodal AI models can understand the context better by analyzing different types of data simultaneously. For example, they can understand a scene in a movie by analyzing the video, audio, and subtitles together. This leads to more accurate and context-rich generation of content.
Cross-Modal Applications: These models can be used in applications that require understanding and generation across different types of data. For example, they can be used to generate a textual description of an image, convert text to speech, or even generate a video from a textual description.
Improved Interaction: Multimodal AI models can interact with users in a more natural and intuitive way. They can understand user inputs in different formats (text, voice, images) and generate responses in the most appropriate format.
Creative Applications: These models can be used in creative applications like generating music from text, creating art from descriptions, or generating stories from images.
Data Analysis: Multimodal AI models can analyze complex datasets that include different types of data. This can be useful in fields like healthcare, where patient data can include text (medical records), images (scans), and numerical data (vital signs).
This is massive. It will change the way the world operates. It will change the way people live, think, eat, and communicate. This will be how mankind expresses itself in the future.
The potential of multimodal Generative AIs like MetaGPT is vast, and we are just beginning to explore their capabilities. As these models continue to improve, they will likely become an integral part of many different fields and industries. As I mentioned, there is a new robot model created by Google that is completely multimodal, and it's making waves across the LLM and generative AI fields. But as I said earlier, I'll save that for my next article. Soon, we hope to have LangChain-AutoGPT-Deep Lake projects on ProjectPro.
LLMs of the Future[1]
● AGI is coming soon. That threat / opportunity cannot be factored out. We need companies with ethics, honesty, and integrity. But when did a large conglomerate corporation in FAANG ever exhibit that?
● Also access to AI must be democratized. It should be made available for all. The HuggingFace Open-Source Ecosystem and website is a beautiful application of exactly that.
● AI should not serve 1% of humanity and ignore the remaining 99%. With climatic conditions as they are, that would be nothing short of genocide.
● How do we democratize Big Data, AI, and Data Science projects? Usually, they require 500,000 USD to build and extract value, and at the end of it you could still fail.
● This is where ProjectPro comes in, with over 250 projects in Data Science, Artificial Intelligence, and Cloud Computing. The cost of a project that would normally run to 500,000 USD is suddenly around 500 USD!
● Don't believe me? Head over to the ProjectPro.io website to find out more! We are now democratizing (for corporations) data science, big data, and AI end-to-end systems.
● Visit our ProjectPro website for a most pleasant surprise.
Cheers to you, whoever you are, wherever in the world you are. May the plans for your company truly come true. May you remain passionate, interested and joyful to stay on the cutting edge of both technology and the latest tech news. The stars, not the sky, are the limit with ProjectPro.
[1] https://www.forbes.com/sites/robtoews/2023/02/07/the-next-generation-of-large-language-models/?sh=5d125ded18db
References
https://en.wikipedia.org/wiki/Generative_artificial_intelligence
https://github.com/Significant-Gravitas/Auto-GPT/tree/master/autogpt/agents
Vaswani, A., Shazeer, N.M., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., & Polosukhin, I. (2017). Attention is All you Need. NIPS.
Hong, S., Zheng, X., Chen, J., Cheng, Y., Zhang, C., Wang, Z., Yau, S.K., Lin, Z.H., Zhou, L., Ran, C., Xiao, L., & Wu, C. (2023). MetaGPT: Meta Programming for Multi-Agent Collaborative Framework.
Topsakal, O., & Akinci, T.C. (2023). Creating Large Language Model Applications Utilizing LangChain: A Primer on Developing LLM Apps Fast. International Conference on Applied Engineering and Natural Sciences.
Hambardzumyan, S., Tuli, A., Ghukasyan, L., Rahman, F., Topchyan, H., Isayan, D., Harutyunyan, M., Hakobyan, T., Stranic, I., & Buniatyan, D. (2022). Deep Lake: a Lakehouse for Deep Learning. ArXiv, abs/2209.10785.
Guo, R., Sun, P., Lindgren, E.M., Geng, Q., Simcha, D., Chern, F., & Kumar, S. (2019). Accelerating Large-Scale Inference with Anisotropic Vector Quantization. International Conference on Machine Learning.
All Images courtesy of Bing Image Creator.
Written by Thomas Cherickal
Online Store: https://thomascherickal.gumroad.com Resume: https://thomascherickal.com Portfolio: https://linktr.ee/thomascherickal Contact me at: thomascherickal@gmail.com Or (+91) 9884452724