Tag Archives: AI News

Using ideas from game theory to improve the reliability of language models

Imagine you and a friend are playing a game where your goal is to communicate secret messages to each other using only cryptic sentences. Your friend’s job is to guess the secret message behind your sentences. Sometimes, you give clues directly, and other times, your friend has to guess the message by asking yes-or-no questions about the clues you’ve given. The challenge is that both of you want to make sure you’re understanding each other correctly and agreeing on the secret message.

MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers have created a similar “game” to help improve how AI understands and generates text. It is known as a “consensus game” and it involves two parts of an AI system — one part tries to generate sentences (like giving clues), and the other part tries to understand and evaluate those sentences (like guessing the secret message).

The researchers discovered that by treating this interaction as a game, where both parts of the AI work together under specific rules to agree on the right message, they could significantly improve the AI’s ability to give correct and coherent answers to questions. They tested this new game-like approach on a variety of tasks, such as reading comprehension, solving math problems, and carrying on conversations, and found that it helped the AI perform better across the board.

Traditionally, large language models answer questions in one of two ways: generating answers directly from the model (generative querying) or using the model to score a set of predefined answers (discriminative querying), approaches that can lead to differing and sometimes incompatible results. With the generative approach, “Who is the president of the United States?” might yield a straightforward answer like “Joe Biden.” However, a discriminative query could incorrectly dispute this fact when evaluating the same answer, such as “Barack Obama.”

So, how do we reconcile mutually incompatible scoring procedures to achieve coherent, efficient predictions? 

“Imagine a new way to help language models understand and generate text, like a game. We’ve developed a training-free, game-theoretic method that treats the whole process as a complex game of clues and signals, where a generator tries to send the right message to a discriminator using natural language. Instead of chess pieces, they’re using words and sentences,” says Athul Jacob, an MIT PhD student in electrical engineering and computer science and CSAIL affiliate. “Our way to navigate this game is finding the ‘approximate equilibria,’ leading to a new decoding algorithm called ‘equilibrium ranking.’ It’s a pretty exciting demonstration of how bringing game-theoretic strategies into the mix can tackle some big challenges in making language models more reliable and consistent.”

When tested across many tasks, like reading comprehension, commonsense reasoning, math problem-solving, and dialogue, the team’s algorithm consistently improved how well these models performed. Using the ER algorithm with the LLaMA-7B model even outshone the results from much larger models. “Given that they are already competitive, that people have been working on it for a while, but the level of improvements we saw being able to outperform a model that’s 10 times the size was a pleasant surprise,” says Jacob. 

Game on

“Diplomacy,” a strategic board game set in pre-World War I Europe, where players negotiate alliances, betray friends, and conquer territories without the use of dice — relying purely on skill, strategy, and interpersonal manipulation — recently had a second coming. In November 2022, computer scientists, including Jacob, developed “Cicero,” an AI agent that achieves human-level capabilities in the mixed-motive seven-player game, which requires the same aforementioned skills, but with natural language. The math behind this partially inspired the Consensus Game. 

While the history of AI agents long predates when OpenAI’s software entered the chat in November 2022, it’s well documented that they can still cosplay as your well-meaning, yet pathological friend. 

The consensus game system reaches equilibrium as an agreement, ensuring accuracy and fidelity to the model’s original insights. To achieve this, the method iteratively adjusts the interactions between the generative and discriminative components until they reach a consensus on an answer that accurately reflects reality and aligns with their initial beliefs. This approach effectively bridges the gap between the two querying methods. 
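As a toy numerical sketch of that back-and-forth (the scores are hypothetical, and this is deliberately far simpler than the paper's actual equilibrium-ranking procedure), imagine three candidate answers scored by the generator and the discriminator, with each side repeatedly moving toward the other while staying anchored to its own prior:

import numpy as np

# Hypothetical scores over three candidate answers for one question
gen_prior = np.array([0.70, 0.20, 0.10])   # generator's initial distribution
disc_prior = np.array([0.30, 0.60, 0.10])  # discriminator's initial "is this correct?" scores

gen, disc = gen_prior.copy(), disc_prior.copy()
lam = 0.5  # how strongly each player stays anchored to its own prior
for _ in range(100):
    # Each player blends its prior with the other player's current policy
    gen = (gen_prior ** lam) * (disc ** (1 - lam)); gen /= gen.sum()
    disc = (disc_prior ** lam) * (gen ** (1 - lam)); disc /= disc.sum()

consensus = gen * disc
print(consensus / consensus.sum())  # a ranking signal that combines both players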

In practice, implementing the consensus game approach to language model querying, especially for question-answering tasks, does involve significant computational challenges. For example, when using datasets like MMLU, which have thousands of questions and multiple-choice answers, the model must apply the mechanism to each query. Then, it must reach a consensus between the generative and discriminative components for every question and its possible answers. 

The system did struggle with a grade-school rite of passage: math word problems. It couldn’t generate wrong answers, which is a critical component of understanding the process of coming up with the right one.

“The last few years have seen really impressive progress in both strategic decision-making and language generation from AI systems, but we’re just starting to figure out how to put the two together. Equilibrium ranking is a first step in this direction, but I think there’s a lot we’ll be able to do to scale this up to more complex problems,” says Jacob.   

An avenue of future work involves enhancing the base model by integrating the outputs of the current method. This is particularly promising since it can yield more factual and consistent answers across various tasks, including factuality and open-ended generation. The potential for such a method to significantly improve the base model’s performance is high, which could result in more reliable and factual outputs from ChatGPT and similar language models that people use daily. 

“Even though modern language models, such as ChatGPT and Gemini, have led to solving various tasks through chat interfaces, the statistical decoding process that generates a response from such models has remained unchanged for decades,” says Google Research Scientist Ahmad Beirami, who was not involved in the work. “The proposal by the MIT researchers is an innovative game-theoretic framework for decoding from language models through solving the equilibrium of a consensus game. The significant performance gains reported in the research paper are promising, opening the door to a potential paradigm shift in language model decoding that may fuel a flurry of new applications.”

Jacob wrote the paper with MIT-IBM Watson Lab researcher Yikang Shen and MIT Department of Electrical Engineering and Computer Science assistant professors Gabriele Farina and Jacob Andreas, who is also a CSAIL member. They presented their work at the International Conference on Learning Representations (ICLR) earlier this month, where it was highlighted as a “spotlight paper.” The research also received a “best paper award” at the NeurIPS R0-FoMo Workshop in December 2023.

Evaluation of generative AI techniques for clinical report summarization

In part 1 of this blog series, we discussed how a large language model (LLM) available on Amazon SageMaker JumpStart can be fine-tuned for the task of radiology report impression generation. Since then, Amazon Web Services (AWS) has introduced new services such as Amazon Bedrock. This is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading artificial intelligence (AI) companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API.

Amazon Bedrock also comes with a broad set of capabilities required to build generative AI applications with security, privacy, and responsible AI. It’s serverless, so you don’t have to manage any infrastructure. You can securely integrate and deploy generative AI capabilities into your applications using the AWS services you are already familiar with. In this part of the blog series, we review techniques of prompt engineering and Retrieval Augmented Generation (RAG) that can be employed to accomplish the task of clinical report summarization by using Amazon Bedrock.

When summarizing healthcare texts, pre-trained LLMs do not always achieve optimal performance. LLMs can handle complex tasks like math problems and commonsense reasoning, but they are not inherently capable of performing domain-specific complex tasks. They require guidance and optimization to extend their capabilities and broaden the range of domain-specific tasks they can perform effectively. This can be achieved through the use of properly guided prompts. Prompt engineering helps to effectively design and improve prompts to get better results on different tasks with LLMs. There are many prompt engineering techniques.

In this post, we provide a comparison of results obtained by two such techniques: zero-shot and few-shot prompting. We also explore the utility of the RAG prompt engineering technique as it applies to the task of summarization. Evaluating LLMs is an undervalued part of the machine learning (ML) pipeline. It is time-consuming but, at the same time, critical. We benchmark the results with a metric used for evaluating summarization tasks in the field of natural language processing (NLP) called Recall-Oriented Understudy for Gisting Evaluation (ROUGE). This metric assesses how well a machine-generated summary compares to one or more reference summaries.
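As a quick illustration of how such a comparison can be computed (using the open-source rouge-score package; the example texts are made up), a ROUGE score for a generated impression against a reference impression looks like this:

# Requires: pip install rouge-score
from rouge_score import rouge_scorer

reference = "No acute cardiopulmonary process."
candidate = "No evidence of acute cardiopulmonary disease."

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, candidate)
print({name: round(score.fmeasure, 3) for name, score in scores.items()})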

Solution overview

In this post, we start with exploring a few of the prompt engineering techniques that will help assess the capabilities and limitations of LLMs for healthcare-specific summarization tasks. For more complex, clinical knowledge-intensive tasks, it’s possible to build a language model–based system that accesses external knowledge sources to complete the tasks. This enables more factual consistency, improves the reliability of the generated responses, and helps to mitigate the propensity that LLMs have to be confidently wrong, called hallucination.

Pre-trained language models

In this post, we experimented with Anthropic’s Claude 3 Sonnet model, which is available on Amazon Bedrock. This model is used for the clinical summarization tasks where we evaluate the few-shot and zero-shot prompting techniques. This post then seeks to assess whether prompt engineering is more performant for clinical NLP tasks compared to the RAG pattern and fine-tuning.

Dataset

The MIMIC Chest X-ray (MIMIC-CXR) Database v2.0.0 is a large publicly available dataset of chest radiographs in DICOM format with free-text radiology reports. We used the MIMIC CXR dataset, which can be accessed through a data use agreement. This requires user registration and the completion of a credentialing process.

During routine clinical care, clinicians trained in interpreting imaging studies (radiologists) summarize their findings for a particular study in a free-text note. Radiology reports for the images were identified and extracted from the hospital’s electronic health records (EHR) system. The reports were de-identified using a rule-based approach to remove any protected health information.

Because we used only the radiology report text data, we downloaded just one compressed report file (mimic-cxr-reports.zip) from the MIMIC-CXR website. For evaluation, the 2,000 reports (referred to as the ‘dev1’ dataset) from a subset of this dataset and the 2,000 radiology reports (referred to as ‘dev2’) from the chest X-ray collection from the Indiana University hospital network were used.

Techniques and experimentation

Prompt design is the technique of creating the most effective prompt for an LLM with a clear objective. Crafting a successful prompt requires a deeper understanding of the context; it’s the subtle art of asking the right questions to elicit the desired answers. Different LLMs may interpret the same prompt differently, and some may have specific keywords with particular meanings. Also, depending on the task, domain-specific knowledge is crucial in prompt creation. Finding the perfect prompt often involves a trial-and-error process.

Prompt structure

Prompts can specify the desired output format, provide prior knowledge, or guide the LLM through a complex task. A prompt has three main types of content: input, context, and examples. The first of these specifies the information for which the model needs to generate a response. Inputs can take various forms, such as questions, tasks, or entities. The latter two are optional parts of a prompt. Context provides relevant background to ensure the model understands the task or query, such as the schema of a database in the example of natural language querying. Examples can be something like adding an excerpt of a JSON file to the prompt to coerce the LLM to output the response in that specific format. Combined, these components of a prompt customize the response format and behavior of the model.

Prompt templates are predefined recipes for generating prompts for language models. Different templates can be used to express the same concept, so it is essential to carefully design templates that maximize the capability of the model. The prompt task is defined through prompt engineering; once the prompt template is defined, the model generates tokens to fill it. For instance, “Generate radiology report impressions based on the following findings and output it within <impression> tags.” In this case, the model fills the <impression> tags with generated tokens.
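As a minimal illustration of filling such a template at run time (the template text loosely mirrors the one used later in this post, and the sample findings are made up):

prompt_template = (
    "Generate radiology report impressions based on the following findings "
    "and output it within <impression> tags.\nFindings: {findings}"
)
findings = "Lungs are clear. No pleural effusion or pneumothorax."
print(prompt_template.format(findings=findings))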

Zero-shot prompting

Zero-shot prompting means providing a prompt to an LLM without any (zero) examples. With a single prompt and no examples, the model should still generate the desired result. This technique makes LLMs useful for many tasks. We applied the zero-shot technique to generate impressions from the findings section of a radiology report.

In clinical use cases, numerous medical concepts need to be extracted from clinical notes. Meanwhile, very few annotated datasets are available. It’s important to experiment with different prompt templates to get better results. An example zero-shot prompt used in this work is shown in Figure 1.

Figure 1 – Zero-shot prompting

Few-shot prompting

The few-shot prompting technique is used to increase performance compared to the zero-shot technique. Large, pre-trained models have demonstrated remarkable capabilities in solving an abundance of tasks by being provided only a few examples as context. This is known as in-context learning, through which a model learns a task from a few provided examples, specifically during prompting and without tuning the model parameters. In the healthcare domain, this bears great potential to vastly expand the capabilities of existing AI models.

Figure 2 – Few-shot prompting

Few-shot prompting uses a small set of input-output examples to train the model for specific tasks. The benefit of this technique is that it doesn’t require large amounts of labeled data (examples) and performs reasonably well by providing guidance to large language models.
In this work, five examples of findings and impressions were provided to the model for few-shot learning as shown in Figure 2.

Retrieval Augmented Generation pattern

The RAG pattern builds on prompt engineering. Instead of a user providing relevant data, an application intercepts the user’s input. The application searches across a data repository to retrieve content relevant to the question or input. The application feeds this relevant data to the LLM to generate the content. A modern healthcare data strategy enables the curation and indexing of enterprise data. The data can then be searched and used as context for prompts or questions, assisting an LLM in generating responses.

To implement our RAG system, we used a dataset of 95,000 radiology report findings-impressions pairs as the knowledge source. This dataset was uploaded to an Amazon Simple Storage Service (Amazon S3) data source and then ingested using Knowledge Bases for Amazon Bedrock. We used the Amazon Titan Text Embeddings model on Amazon Bedrock to generate vector embeddings.

Embeddings are numerical representations of real-world objects that ML systems use to understand complex knowledge domains the way humans do. The output vector representations were stored in a newly created vector store for efficient retrieval: an Amazon OpenSearch Serverless vector search collection, with a vector index set up with the required fields and necessary configurations. With the infrastructure in place, we set up a prompt template and used the RetrieveAndGenerate API for vector similarity search, then used the Anthropic Claude 3 Sonnet model for impression generation. Together, these components enabled both precise document retrieval and high-quality conditional text generation from the findings-to-impressions dataset.
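As a sketch of what the embedding step looks like when invoked directly through the Bedrock runtime (outside the managed Knowledge Bases flow; the sample text is illustrative):

import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

def embed(text):
    # Titan Text Embeddings expects an "inputText" field and returns an "embedding" vector
    response = bedrock_runtime.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": text}),
        accept="application/json",
        contentType="application/json",
    )
    return json.loads(response["body"].read())["embedding"]

print(len(embed("FINDINGS: The lungs are clear. No pleural effusion or pneumothorax.")))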

The following reference architecture diagram in Figure 3 illustrates the fully managed RAG pattern with Knowledge Bases for Amazon Bedrock on AWS. The fully managed RAG provided by Knowledge Bases for Amazon Bedrock converts user queries into embeddings, searches the knowledge base, obtains relevant results, augments the prompt, and then invokes an LLM (Claude 3 Sonnet) to generate the response.

Figure 3 – Retrieval Augmented Generation pattern

Prerequisites

You need to have the following to run this demo application:

An AWS account
Basic understanding of how to navigate Amazon SageMaker Studio
Basic understanding of how to download a repo from GitHub
Basic knowledge of running a command on a terminal

Key steps in implementation

The following are key details of each technique:

Zero-shot prompting

prompt_zero_shot = """Human: Generate radiology report impressions based on the following findings and output it within <impression> tags. Findings: {} Assistant:"""

Few-shot prompting

examples_string = ""
for ex in examples:
    examples_string += f"""H:{ex['findings']}
A:{ex['impression']}\n"""

prompt_few_shot = """Human: Generate radiology report impressions based on the following findings. Findings: {}
Here are a few examples: """ + examples_string + """
Assistant:"""

Implementation of Retrieval Augmented Generation

Load the reports into the Amazon Bedrock knowledge base by connecting to the S3 bucket (data source).
The knowledge base will split them into smaller chunks (based on the strategy selected), generate embeddings, and store them in the associated vector store. For detailed steps, refer to the Amazon Bedrock User Guide. We used Amazon Titan Embeddings G1 – Text embedding model for converting the reports data to embeddings.
Once the knowledge base is up and running, locate the knowledge base ID and generate the model Amazon Resource Name (ARN) for the Claude 3 Sonnet model using the following code:

kb_id = "XXXXXXXXXX"  # Replace it with the knowledge base ID for your knowledge base
model_id = "anthropic.claude-3-sonnet-20240229-v1:0"
model_arn = f"arn:aws:bedrock:{region_id}::foundation-model/{model_id}"

Set up the Amazon Bedrock runtime client using the latest version of AWS SDK for Python (Boto3).

import boto3
from botocore.config import Config

bedrock_config = Config(connect_timeout=120, read_timeout=120, retries={"max_attempts": 0})
bedrock_client = boto3.client("bedrock-runtime")
bedrock_agent_client = boto3.client("bedrock-agent-runtime", config=bedrock_config)
boto3_session = boto3.session.Session()
region_name = boto3_session.region_name

Use the RetrieveAndGenerate API to retrieve the most relevant report from the knowledge base and generate an impression.

return bedrock_agent_client.retrieve_and_generate(
    input={
        'text': input
    },
    retrieveAndGenerateConfiguration={
        'knowledgeBaseConfiguration': {
            'generationConfiguration': {
                'promptTemplate': {
                    'textPromptTemplate': promptTemplate
                }
            },
            'knowledgeBaseId': kb_id,
            'modelArn': model_arn,
            'retrievalConfiguration': {
                'vectorSearchConfiguration': {
                    'numberOfResults': 3,
                    'overrideSearchType': 'HYBRID'
                }
            }
        },
        'type': 'KNOWLEDGE_BASE'
    },
)

Use the following prompt template along with query (findings) and retrieval results to generate impressions with the Claude 3 Sonnet LLM.

promptTemplate = f"""
You have to generate radiology report impressions based on the following findings. Your job is to generate impression using only information from the search results.
Return only a single sentence and do not return the findings given.

Findings: $query$

Here are the search results in numbered order:
$search_results$ """

Evaluation

Performance analysis

The performance of zero-shot, few-shot, and RAG techniques is evaluated using the ROUGE score. For more details on the definition of various forms of this score, please refer to part 1 of this blog.

The following table depicts the evaluation results for the dev1 and dev2 datasets. The evaluation result on dev1 (2,000 findings from the MIMIC CXR Radiology Report) shows that the zero-shot prompting performance was the poorest, whereas the RAG approach for report summarization performed the best. The use of the RAG technique led to substantial gains in performance, improving the aggregated average ROUGE1 and ROUGE2 scores by approximately 18 and 16 percentage points, respectively, compared to the zero-shot prompting method. An approximately 8 percentage point improvement is observed in aggregated ROUGE1 and ROUGE2 scores over the few-shot prompting technique.

Model      Technique    Dataset: dev1                               Dataset: dev2
                        ROUGE1   ROUGE2   ROUGEL   ROUGELSum        ROUGE1   ROUGE2   ROUGEL   ROUGELSum
Claude 3   Zero-shot    0.242    0.118    0.202    0.218            0.210    0.095    0.185    0.194
Claude 3   Few-shot     0.349    0.204    0.309    0.312            0.439    0.273    0.351    0.355
Claude 3   RAG          0.427    0.275    0.387    0.387            0.438    0.309    0.430    0.430

For dev2, an improvement of approximately 23 and 21 percentage points is observed in the ROUGE1 and ROUGE2 scores, respectively, of the RAG-based technique over zero-shot prompting. Overall, RAG led to an improvement of approximately 17 percentage points and 24 percentage points in ROUGELSum scores for the dev1 and dev2 datasets, respectively. The distribution of ROUGE scores attained by the RAG technique for the dev1 and dev2 datasets is shown in the following graphs.


Graphs – Distribution of ROUGE scores attained by the RAG technique for the dev1 (left) and dev2 (right) datasets

It is worth noting that RAG attains a consistent average ROUGELSum for both test datasets (dev1 = 0.387 and dev2 = 0.430). This is in contrast to the average ROUGELSum attained with the fine-tuned FLAN-T5 XL model presented in part 1 of this blog series (dev1 = 0.5708 and dev2 = 0.4525). Dev1 is a subset of the MIMIC dataset, samples from which were used as context in the knowledge base. With the RAG approach, the median ROUGELSum is nearly the same for the dev1 and dev2 datasets.

Overall, RAG is observed to attain good ROUGE scores but falls short of the impressive performance of the fine-tuned FLAN-T5 XL model presented in part 1 of this blog series.

Cleanup

To avoid incurring future charges, delete all the resources you deployed as part of the tutorial.

Conclusion

In this post, we presented how various generative AI techniques can be applied for healthcare-specific tasks. We saw incremental improvement in results for domain-specific tasks as we evaluated and compared prompting techniques and the RAG pattern. We also see how fine-tuning the model to healthcare-specific data is comparatively better, as demonstrated in part 1 of the blog series. We expect to see significant improvements with increased data at scale, more thoroughly cleaned data, and alignment to human preference through instruction tuning or explicit optimization for preferences.

Limitations: This work is a proof of concept. On deeper analysis, we occasionally observed hallucinations in the generated impressions.

About the authors

Ekta Walia Bhullar, PhD, is a senior AI/ML consultant with AWS Healthcare and Life Sciences (HCLS) professional services business unit. She has extensive experience in the application of AI/ML within the healthcare domain, especially in radiology. Outside of work, when not discussing AI in radiology, she likes to run and hike.

Priya Padate is a Senior Partner Solutions Architect with extensive expertise in Healthcare and Life Sciences at AWS. Priya drives go-to-market strategies with partners and drives solution development to accelerate AI/ML-based development. She is passionate about using technology to transform the healthcare industry to drive better patient care outcomes.

Dr. Adewale Akinfaderin is a senior data scientist in healthcare and life sciences at AWS. His expertise is in reproducible and end-to-end AI/ML methods, practical implementations, and helping global healthcare customers formulate and develop scalable solutions to interdisciplinary problems. He has two graduate degrees in physics and a doctorate in engineering.

Srushti Kotak is an Associate Data and ML Engineer at AWS Professional Services. She has a strong data science and deep learning background with experience in developing machine learning solutions, including generative AI solutions, to help customers solve their business challenges. In her spare time, Srushti loves to dance, travel, and spend time with friends and family.

When consumers would prefer a chatbot over a person

Actually, sometimes consumers don’t want to talk to a real person when they’re shopping online, a new study suggests. In fact, what they really want is a chatbot that makes it clear that it is not human at all. In a new study, researchers found that people preferred interacting with chatbots when they felt embarrassed about what they were buying online — items like antidiarrheal medicine or, for some people, skin care products.

Enhanced autoscaling with VASIM: Vertical Autoscaling Simulator Toolkit

This research was presented as a demonstration at the 40th IEEE International Conference on Data Engineering (ICDE 2024), one of the premier conferences on data and information engineering.

Since the inception of cloud computing, autoscaling has been an essential technique for optimizing resources and performance. By dynamically adjusting the number of computing resources allocated to a service based on current demand, autoscaling ensures that the service can handle the load efficiently while optimizing costs. However, developing and fine-tuning autoscaling algorithms, which govern this process, present significant challenges. The complexity and cost associated with testing these algorithms can lead to inefficient resource management and impede the development of more effective autoscaling strategies.

In our paper, “VASIM: Vertical Autoscaling Simulator Toolkit,” presented at ICDE 2024, we introduce a tool designed to address the complexities involved in assessing autoscaling algorithms. While existing simulation tools cover a range of capabilities, such as energy efficiency and fault tolerance, VASIM stands out by evaluating the critical recommender component within the algorithm and suggesting optimal resource scaling actions based on usage data, balancing performance and cost. This enables developers to iterate more rapidly, enhancing algorithmic performance, and improving resource efficiency and cost savings.
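To make the recommender idea concrete, here is a toy sketch (it does not use VASIM's API; the synthetic trace, the peak-plus-buffer policy, and the metrics are purely illustrative) of evaluating a simple recommender against a CPU usage trace and reporting slack, throttling, and the number of scaling operations:

import numpy as np

def simple_recommender(cpu_trace, window=12, buffer_pct=0.15):
    # Toy policy: set the CPU limit to the recent peak plus a safety buffer
    limits = []
    for t in range(len(cpu_trace)):
        recent = cpu_trace[max(0, t - window):t + 1]
        limits.append(recent.max() * (1 + buffer_pct))
    return np.array(limits)

def evaluate(cpu_trace, limits):
    # Metrics in the spirit of VASIM's outputs: slack, throttling, and scaling operations
    slack = float(np.clip(limits - cpu_trace, 0, None).mean())   # average unused capacity
    throttled = float((cpu_trace > limits).mean())               # fraction of throttled intervals
    scaling_ops = int(np.count_nonzero(np.diff(limits)))         # number of limit changes
    return slack, throttled, scaling_ops

rng = np.random.default_rng(0)
usage = np.abs(np.sin(np.linspace(0, 12, 288))) * 4 + rng.random(288)  # synthetic daily CPU trace
limits = simple_recommender(usage)
print(evaluate(usage, limits))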

VASIM’s user-friendly interface simplifies the evaluation of autoscaling policies, as illustrated in Figure 1. First steps entail uploading historical data and defining autoscaling policies, including the algorithm and its parameters, shown in the left panel. The Simulation Run feature enables the modification of algorithm parameters, imported via a configuration file, and the execution of simulations based on the selected trace. A results screen displays the CPU limits determined by the selected policies as well as the actual CPU usage tailored to these limits. Additionally, VASIM provides fundamental metrics like throttling incidents, number of scaling operations, and amount of unused capacity, or slack, for the current simulation.

Figure 1. The VASIM user interface comprises a run simulation pane on the left and a results pane on the right.

VASIM achieves several important goals:

Resource efficiency and cost reduction. VASIM reduces costs by removing the need to test scaling operations in real-time, which would be resource intensive. This enables developers to adjust algorithms iteratively in a controlled, cost-efficient environment, accelerating development cycles. Because the tool allows users to upload CPU performance history and algorithm parameters, it delivers the results of scaling operations across the entire workload in minutes rather than hours.

Multi-objective optimization. It’s challenging to develop an autoscaling method that handles conflicting parameters. VASIM makes this easier by applying Pareto optimization techniques, helping developers to find a balance among key metrics (a toy sketch of this Pareto filtering appears after this list). Figure 2 depicts scatter plots for two metrics: average slack and average insufficient CPU. It also shows three optimization objectives: the optimal amount of slack, throttling, and number of scaling operations.

Figure 2. The 2D diagram on the left shows a scatter plot of tuning with Pareto points. The 3D graph on the right shows a scatter plot with the three objectives.

Recommender algorithm testing. VASIM simplifies the process of testing and evaluating recommendation algorithms across diverse workloads. With all tuning jobs running in parallel, computation occurs more quickly, allowing users to efficiently adjust their recommender parameters as necessary. To assess the algorithm’s generalizability, we ran VASIM against 11 available open cluster traces for benchmarking and internal product workload traces. This enabled us to evaluate the algorithms’ robustness across a variety of workload types, including cyclical, bursty, and monotonic variations, demonstrating their reliability across different scenarios.

Versatility and adaptability. VASIM provides users with the flexibility to modify components, experiment with recommendation strategies, and evaluate the impact of changes in a controlled and customizable environment. Figure 3 shows the results of a simulation run on the same algorithm and historical performance data but with different parameters. This versatility ensures that infrastructure engineers can tailor the system to meet their needs, enhancing the overall effectiveness of their autoscaling strategies.

Figure 3. These graphs show VASIM running an identical algorithm on the same historical data but with varying parameters, affecting slack, throttling, and the frequency of scaling events. The objective is to maintain a minimal gap between the peak and the lowest resource utilization levels—the top of the bottom line and the bottom of the top line, respectively. The goal is also to reduce the space between the response lag indicated by the trailing edges to the left of the lines. Simultaneously, it’s important to minimize the occurrence of scaling events to prevent disruptions in workload execution.
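As promised above, here is a simple illustration of the Pareto filtering idea (the candidate configurations and their metric values are made up; lower is better for both objectives):

def pareto_front(points):
    # Keep a point only if no other point is at least as good on both objectives
    # and strictly better on at least one
    front = []
    for i, p in enumerate(points):
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(p)
    return front

# (average slack, average insufficient CPU) for several hypothetical tuning configurations
configs = [(0.80, 0.02), (0.50, 0.05), (0.65, 0.06), (0.30, 0.12), (0.55, 0.10), (0.25, 0.20)]
print(pareto_front(configs))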

Optimizing scalability and costs in Kubernetes environments

Our research on vertically autoscaling monolithic applications with a container-as-a-service algorithm helped us to better understand the tradeoffs between cost and availability that different algorithm variations introduce. Because VASIM is similar to standard autoscaling architecture (as in the Kubernetes Vertical Pod Autoscaler [VPA]), it allows us to test autoscaling algorithms for pods, applications, and virtual machine (VM) capacity. This is possible because these systems share similar components, including resource updaters, controllers, and recommenders. Despite differences in specific systems, their underlying architectures are sufficiently similar, enabling VASIM to effectively mimic them, as shown in Figure 4.

 

Figure 4. VASIM architecture mimics the main components of general autoscaling architectures, allowing users to parametrize those modules to fit their specific needs.

 

Implications and looking ahead

Looking forward, we plan to broaden the scope of VASIM’s support beyond just CPUs to include a wide range of resources, such as memory, disk I/O, and network bandwidth. This expansion will provide future users with a comprehensive understanding of system performance and enable them to make more accurate decisions regarding system management and resource optimization. Additionally, a deeper understanding of system performance will help inform proactive optimization strategies focused on maximizing system efficiency and performance.


MatterSim: A deep-learning model for materials under real-world conditions

In the quest for groundbreaking materials crucial to nanoelectronics, energy storage, and healthcare, a critical challenge looms: predicting a material’s properties before it is even created. This is no small feat, with any combination of 118 elements in the periodic table, and the range of temperatures and pressures under which materials are synthesized and operated. These factors drastically affect atomic interactions within materials, making accurate property prediction and behavior simulation exceedingly demanding.

Here at Microsoft Research, we developed MatterSim, a deep-learning model for accurate and efficient materials simulation and property prediction over a broad range of elements, temperatures, and pressures, enabling in silico materials design. MatterSim employs deep learning to understand atomic interactions from the fundamental principles of quantum mechanics, across a comprehensive spectrum of elements and conditions—from 0 to 5,000 Kelvin (K), and from standard atmospheric pressure to 10,000,000 atmospheres. In our experiments, MatterSim efficiently handles simulations for a variety of materials, including metals, oxides, sulfides, halides, and their various states such as crystals, amorphous solids, and liquids. Additionally, it offers customization options for intricate prediction tasks by incorporating user-provided data.

Figure 1. MatterSim can model materials properties and behaviors under realistic temperature and pressure conditions for wide ranges of applications.

Simulating materials under realistic conditions across the periodic table

MatterSim’s learning foundation is built on large-scale synthetic data, generated through a blend of active learning, generative models, and molecular dynamics simulations. This data generation strategy ensures extensive coverage of material space, enabling the model to predict energies, atomic forces, and stresses. It serves as a machine-learning force field with a level of accuracy compatible with first-principles predictions. Notably, MatterSim achieves a 10-fold increase in accuracy for material property predictions at finite temperatures and pressures when compared to previous state-of-the-art models. Our research demonstrates its proficiency in simulating a vast array of material properties, including thermal, mechanical, and transport properties, and it can even predict phase diagrams.

Figure 2. MatterSim achieves high accuracy in predicting mechanical properties, vibrational properties, and phase diagrams of materials, comparable to quantum mechanics methods and experimental measurements. The figure shows the comparison between the predicted properties and the experimentally measured results.
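For context, a machine-learning force field of this kind typically plugs into an atomistic simulation toolkit as a calculator that returns energies and forces for a given structure. The sketch below shows that generic workflow using the Atomic Simulation Environment (ASE) with its built-in EMT potential purely as a stand-in; it is not MatterSim's own interface.

from ase.build import bulk
from ase.calculators.emt import EMT

# Build a small copper supercell and attach a force-field calculator
# (EMT is a simple analytic potential used here only to illustrate the workflow)
atoms = bulk("Cu", "fcc", a=3.6).repeat((2, 2, 2))
atoms.calc = EMT()

energy = atoms.get_potential_energy()   # total energy in eV
forces = atoms.get_forces()             # per-atom forces in eV/Å
print(energy, forces.shape)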

Adapting to complex design tasks

While trained on broad synthetic datasets, MatterSim is also adaptable for specific design requirements by incorporating additional data. The model utilizes active learning and fine-tuning to customize predictions with high data efficiency. For example, simulating water properties — a task seemingly straightforward but computationally intensive — is significantly optimized with MatterSim’s adaptive capability. The model requires only 3% of the data needed by traditional methods to match experimental accuracy that would otherwise require 30 times more resources for a specialized model and exponentially more for first-principles methods.

Figure 3. MatterSim achieves high data efficiency, with 90%–97% data savings for complex simulation tasks.


Bridging the gap between atomistic models and real-world measurements

Translating material properties from atomic structures is a complex task, often too intricate for current methods based on statistics, such as molecular dynamics. MatterSim addresses this by mapping these relationships directly through machine learning. It incorporates custom adaptor modules that refine the model to predict material properties from structural data, eliminating the need for intricate simulations. Benchmarking against MatBench, a renowned material property prediction benchmark set, MatterSim demonstrates significant accuracy improvement and outperforms all specialized property-specific models, showcasing its robust capability in direct material property prediction from domain-specific data.

Looking ahead 

As MatterSim research advances, the emphasis is on experimental validation to reinforce its potential role in pivotal sectors, including the design of catalysts for sustainability, energy storage breakthroughs, and nanotechnology advancements. The planned integration of MatterSim with generative AI models and reinforcement learning heralds a new era in the systematic pursuit of novel materials. This synergy is expected to revolutionize the field, streamlining guided creation of materials tailored for diverse applications ranging from semiconductor technologies to biomedical engineering. Such progress promises to expedite material development and bolster sustainable industrial practices, thereby fostering technological advancements that will benefit society. 


The power of App Inventor: Democratizing possibilities for mobile applications

In June 2007, Apple unveiled the first iPhone. But the company made a strategic decision about iPhone software: its new App Store would be a walled garden. An iPhone user wouldn’t be able to install applications that Apple itself hadn’t vetted, at least not without breaking Apple’s terms of service.

That business decision, however, left educators out in the cold. They had no way to bring mobile software development — about to become part of everyday life — into the classroom. How could a young student code, futz with, and share apps if they couldn’t get them into the App Store?

MIT professor Hal Abelson was on sabbatical at Google at the time, when the company was deciding how to respond to Apple’s gambit to corner the mobile hardware and software market. Abelson recognized the restrictions Apple was placing on young developers; Google recognized the market need for an open-source alternative operating system — what became Android. Both saw the opportunity that became App Inventor.

“Google started the Android project sort of in reaction to the iPhone,” Abelson says. “And I was there, looking at what we did at MIT with education-focused software like Logo and Scratch, and said ‘what a cool thing it would be if kids could make mobile apps also.’”

Google software engineer Mark Friedman volunteered to work with Abelson on what became “Young Android,” soon renamed Google App Inventor. Like Scratch, App Inventor is a block-based language, allowing programmers to visually snap together pre-made “blocks” of code rather than need to learn specialized programming syntax.

Friedman describes it as novel for the time, particularly for mobile development, to make it as easy as possible to build simple mobile apps. “That meant a web-based app,” he says, “where everything was online and no external tools were required, with a simple programming model, drag-and-drop user interface designing, and blocks-based visual programming.” Thus an app someone programmed in a web interface could be installed on an Android device.

App Inventor scratched an itch. Boosted by the explosion in smartphone adoption and the fact that App Inventor is free (and eventually open source), soon more than 70,000 teachers were using it with hundreds of thousands of students, with Google providing the backend infrastructure to keep it going.

“I remember answering a question from my manager at Google who asked how many users I thought we’d get in the first year,” Friedman says. “I thought it would be about 15,000 — and I remember thinking that might be too optimistic. I was ultimately off by a factor of 10–20.” Friedman was quick to credit more than their choices about the app. “I think that it’s fair to say that while some of that growth was due to the quality of the tool, I don’t think you can discount the effect of it being from Google and of the effect of Hal Abelson’s reputation and network.”

Some early apps took App Inventor in ambitious, unexpected directions, such as “Discardious,” developed by teenage girls in Nigeria. Discardious helped business owners and individuals dispose of waste in communities where disposal was unreliable or too cumbersome.

But even before apps like Discardious came along, the team knew Google’s support wouldn’t be open-ended. No one wanted to cut teachers off from a tool they were thriving with, so around 2010, Google and Abelson agreed to transfer App Inventor to MIT. The transition meant major staff contributions to recreate App Inventor without Google’s proprietary software but MIT needing to work with Google to continue to provide the network resources to keep App Inventor free for the world.

With such a large user base, however, that left Abelson “worried the whole thing was going to collapse” without Google’s direct participation.

Friedman agrees. “I would have to say that I had my fears. App Inventor has a pretty complicated technical implementation, involving multiple programming languages, libraries and frameworks, and by the end of its time at Google we had a team of about 10 people working on it.”

Yet not only did Google provide significant funding to aid the transfer, but, Friedman says of the transfer’s ultimate success, “Hal would be in charge and he had fairly extensive knowledge of the system and, of course, had great passion for the vision and the product.”

MIT enterprise architect Jeffrey Schiller, who built the Institute’s computer network and became its manager in 1984, was another key part in sustaining App Inventor after its transition, helping introduce technical features fundamental to its accessibility and long-term success. He led the integration of the platform into web browsers, the addition of WiFi support rather than needing to connect phones and computers via USB, and the laying of groundwork for technical support of older phones because, as Schiller says, “many of our users cannot rush out and purchase the latest and most expensive devices.”

These collaborations and contributions over time resulted in App Inventor’s greatest resource: its user base. As it grew, and with support from community managers, volunteer know-how grew with it. Now, more than a decade since its launch, App Inventor recently crossed several major milestones, the most remarkable being the creation of its 100 millionth project and registration of its 20 millionth user. Young developers continue to make incredible applications, boosted now by the advantages of AI. College students created “Brazilian XôDengue” as a way for users to use phone cameras to identify mosquito larvae that may be carrying the dengue virus. High school students recently developed “Calmify,” a journaling app that uses AI for emotion detection. And a mother in Kuwait wanted something to help manage the often-overwhelming experience of new motherhood when returning to work, so she built the chatbot “PAM (Personal Advisor to Mothers)” as a non-judgmental space to talk through the challenges.

App Inventor’s long-term sustainability now rests with the App Inventor Foundation, created in 2022 to grow its resources and further drive its adoption. It is led by executive director Natalie Lao.

In a letter to the App Inventor community, Lao highlighted the foundation’s commitment to equitable access to educational resources, which for App Inventor required a rapid shift toward AI education — but in a way that upholds App Inventor’s core values to be “a free, open-source, easy-to-use platform” for mobile devices. “Our mission is to not only democratize access to technology,” Lao wrote, “but also foster a culture of innovation and digital literacy.”

Within MIT, App Inventor today falls under the umbrella of the MIT RAISE Initiative — Responsible AI for Social Empowerment and Education, run by Dean for Digital Learning Cynthia Breazeal, Professor Eric Klopfer, and Abelson. Together they are able to integrate App Inventor into ever-broader communities, events, and funding streams, leading to opportunities like this summer’s inaugural AI and Education Summit on July 24-26. The summit will include awards for winners of a Global AI Hackathon, whose roughly 180 submissions used App Inventor to create AI tools in two tracks: Climate & Sustainability and Health & Wellness. Tying together another of RAISE’s major projects, participants were encouraged to draw from Day of AI curricula, including its newest courses on data science and climate change.

“Over the past year, there’s been an enormous mushrooming in the possibilities for mobile apps through the integration of AI,” says Abelson. “The opportunity for App Inventor and MIT is to democratize those new possibilities for young people — and for everyone — as an enhanced source of power and creativity.”

AWS DeepRacer enables builders of all skill levels to upskill and get started with machine learning

In today’s technological landscape, artificial intelligence (AI) and machine learning (ML) are becoming increasingly accessible, enabling builders of all skill levels to harness their power. As more companies adopt AI solutions, there’s a growing need to upskill both technical and non-technical teams in responsibly expanding AI usage. Getting hands-on experience is crucial for understanding and applying ML concepts to automate tasks like content generation, language translation, and image classification. And that’s where AWS DeepRacer comes into play—a fun and exciting way to learn ML fundamentals.

Launched in 2019, DeepRacer is a fully managed service that enables builders of all skill levels to learn and perform model training and evaluation tasks such as defining a reward function, setting up the training parameters, and configuring a training job that can be evaluated and monitored for model performance in a simulated environment. By exploring the AWS DeepRacer ML training lifecycle, you’ll practice model training, evaluation, and deployment of ML models onto a 1/18th scale autonomous race car, using a human-in-the-loop experience. The model training and evaluation experience enables builders to familiarize themselves with similar concepts applicable in training and fine-tuning foundation models (FMs) that power generative AI applications.
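To make the reward-function idea concrete, here is a minimal sketch of the kind of Python function DeepRacer expects. The parameters read here (all_wheels_on_track, track_width, distance_from_center) are standard DeepRacer input parameters, while the specific weighting is an arbitrary illustration rather than a recommended strategy:

def reward_function(params):
    # Toy reward: keep all wheels on the track and stay close to the center line
    if not params["all_wheels_on_track"]:
        return 1e-3  # strongly penalize leaving the track

    half_width = 0.5 * params["track_width"]
    reward = 1.0 - (params["distance_from_center"] / half_width)
    return float(max(reward, 1e-3))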

AWS DeepRacer also offers a global racing league for competing alongside a community of ML enthusiasts, earning rewards and recognition while showcasing your ML skills. Through the AWS DeepRacer League, we have educated over 550,000 developers, crowned five AWS DeepRacer champions, recognized over 100 monthly virtual circuit winners, and rewarded over 10,000 participants worldwide with Amazon gift cards, cash prizes, and paid trips to AWS re:Invent to compete for the annual AWS DeepRacer Championship Cup.

The excitement around AWS DeepRacer extends far beyond just individual learners. To celebrate Women’s History Month, JPMorgan Chase & Co. recently hosted the “World’s Largest Global Women’s AWS DeepRacer League,” providing employees with a thrilling opportunity to gain hands-on ML experience through virtual autonomous vehicle racing. This event not only fostered a spirit of friendly competition but also celebrated empowerment and innovation in AI and ML. By embracing AWS DeepRacer, JPMorgan Chase showcased its commitment to democratizing ML knowledge and nurturing a culture of continuous learning, empowering its talented teams to drive the company’s AI transformation.

“I am super proud of the group, the firm and the TIF (Take it Forward) team … I couldn’t be more proud of a group of individuals being so self-motivated. The sky is the limit from here! Deep Racer is proof that learning can be fun.”

Ebele Kemery, Head of JPMorgan Chase Tech, Data and AI Learning.

Initiatives like these demonstrate the far-reaching impact of AWS DeepRacer in bringing ML education to the forefront, inspiring learners of all backgrounds to embrace the future of intelligent technologies.

Whether you’re a seasoned developer or curious business professional, AWS DeepRacer provides a fun and exciting way to get started with AI. You’ll gain practical skills applicable to real-world ML and generative AI use cases. So get rolling with machine learning today!

About the authors

Ange Krueger is a principal AWS technologist. She leads product portfolio advancements and technological agility within the global financial sector. Utilizing over 200 AWS cloud services including leading AWS Artificial Intelligence, Machine Learning and Generative AI offerings, she delivers innovation, transformation, and scalable solutions that precisely address the complex demands of our global customers. Through a collaborative approach and a laser focus on customer-centric outcomes, Ange enhances customer experiences to achieve optimized business performance. Her commitment to continual improvement and customer obsession is unwavering, as she works to empower our clients with resilient, cloud-based financial services solutions.

5 Stoic Ideas for a Good Life

including Quotes to Live By

Photo by Daniel Monteiro on Unsplash

1. Dichotomy of Control

The dichotomy of control is about ‘controlling the controllables’.

Control what you can and leave the rest. Never give your ‘freedom to choose’ to anyone else.

“We cannot control the external events around us, but we can control our reactions to them.”— Epictetus

Here’s one from Viktor Frankl:

“Everything can be taken from a man but one thing . . . to choose one’s attitude in any given set of circumstances.”— Viktor Frankl, Man’s Search for Meaning

2. Rule of Life

Make it your life’s goal to ‘search for truth’.

“Seek ye first the good things of the mind,” Bacon admonishes us, “and the rest will either be supplied or its loss will not be felt.”

“Truth will not make us rich, but it will make us free.”— Will Durant

Photo by Helena Lopes on Unsplash

3. Facing Anxiety

Don’t suffer from ‘Imagined Troubles’.

The one who suffers before it is necessary suffers twice.

“Today I escaped anxiety. Or no, I discarded it, because it was within me, in my own perceptions — not outside.”― Marcus Aurelius, Meditations

This one is from Seneca:

“We suffer more in imagination than in reality.”— Seneca

4. How to face Obstacles

According to the Stoics, our obstacles give us the opportunity to practice the four Stoic virtues of wisdom, courage, temperance (moderation), and justice in our daily lives.

Stoics believe in living a life in accordance with nature.

“The impediment to action advances action. What stands in the way becomes the way.”— Marcus Aurelius

5. On Revenge

Give up the feeling of revenge, because you’re going to inflict more pain on yourself.

The best form of revenge is to not be like them.

“The best revenge is to be unlike him who performed the injustice.”— Marcus Aurelius

Conclusion

The Stoics believed that the practice of virtue is enough to achieve ‘Eudaimonia’: a well-lived life.

The Stoic principles include living according to nature, controlling your perspective, managing expectations, negative visualization, re-framing, acceptance, and contemplating death.

By living according to these principles, you will stress less about things that don’t matter and live your life to the fullest.


Transforming Imagery with AI: Exploring Generative Models and the Segment Anything Model (SAM)

Generative models have redefined what’s possible in computer vision, enabling innovations once only imaginable in science fiction. One breakthrough tool is the Segment Anything Model (SAM), which has dramatically simplified isolating subjects in images. In this blog, we’ll explore an application leveraging SAM and text-to-image diffusion models to give users unprecedented control over digital environments. By pairing SAM’s ability to isolate subjects in imagery with diffusion models’ capacity to generate scenes from text, the app lets users transform images in groundbreaking ways.

Project Overview

The goal is to build a web app that allows a user to upload an image, use SAM to create a segmentation mask highlighting the main subject, and then use Stable Diffusion inpainting to generate a new background based on a text prompt. The result is a seamlessly modified image that aligns with the user’s vision.

How It Works

Image Upload and Subject Selection: Users start by uploading an image and selecting the main object they wish to isolate. This selection triggers SAM to generate a precise mask around the object.
Mask Refinement: SAM’s initial mask can be refined by the user, adding or removing points to ensure accuracy. This interactive step ensures that the final mask perfectly captures the subject.
Background or Subject Modification: Once the mask is finalized, users can specify a new background or a different subject through a text prompt. An infill model processes this prompt to generate the desired changes, integrating them into the original image to produce a new, modified version.
Final Touches: Users have the option to further tweak the result, ensuring the modified image meets their expectations.

Implementation and Model

I used SAM (Segment Anything Model) from Meta to handle the segmentation. This model can create high-quality masks with just a couple of clicks to mark the object’s location.

Stable Diffusion uses diffusion models that add noise to real images over multiple steps until they become random noise. A neural network is then trained to remove the noise and recover the original images. By reversing this denoising process on random noise, the model can generate new realistic images matching patterns in the training data.

SAM (Segment Anything Model) generates masks of objects in an image without requiring large supervised datasets. With only a couple of clicks to indicate the location of an object, it can accurately separate the “subject” from the “background”, which is useful for compositing and manipulation tasks.

Stable Diffusion generates images from text prompts and inputs. The inpainting mode allows part of an image to be filled in or altered based on a text prompt.

Combining SAM with diffusion techniques, I set out to create an application that empowers users to reimagine their photos, whether by swapping backgrounds, changing subjects, or creatively altering image compositions.

Loading the model and processing the images

Here, we import the necessary libraries and load the SAM model.
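A minimal sketch of this step, assuming Meta's segment-anything package is installed and the ViT-H checkpoint has been downloaded (file names and the image path are illustrative):

import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

# Load the ViT-H SAM checkpoint and wrap it in a predictor
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
sam.to("cuda")  # or keep on CPU if no GPU is available
predictor = SamPredictor(sam)

# Read the uploaded image as an RGB numpy array
image = np.array(Image.open("photo.jpg").convert("RGB"))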

Image Segmentation with SAM (Segment Anything Model)

Using SAM, we segment the selected subject from the image.
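Continuing from the loading snippet above, a single foreground click (the coordinates are illustrative) is enough to prompt SAM for candidate masks, of which we keep the highest-scoring one:

predictor.set_image(image)
point_coords = np.array([[450, 300]])   # (x, y) pixel location of the subject, chosen by the user
point_labels = np.array([1])            # 1 = foreground point, 0 = background point

masks, scores, _ = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=True,
)
best_mask = masks[int(np.argmax(scores))]  # boolean array, True over the subject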

Inpainting with Diffusion Models

I utilize the inpainting model to alter the background or subject based on user prompts.

The inpainting model takes three key inputs: the original image, the mask-defining areas to edit, and the user’s textual prompt. The magic happens in how the model can understand and artistically interpret these prompts to generate new image elements that blend seamlessly with the untouched parts of the photo.
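A sketch of this step with Hugging Face diffusers' Stable Diffusion inpainting pipeline, continuing from the SAM snippet above (the model ID, prompt, and image sizes are illustrative; the mask is inverted so the background is edited and the subject kept):

import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("photo.jpg").convert("RGB").resize((512, 512))
mask_image = Image.fromarray((~best_mask).astype(np.uint8) * 255).resize((512, 512))  # white = area to repaint

result = pipe(
    prompt="a sunny beach with palm trees at sunset",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("modified.jpg")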

Interactive app

To make this pipeline easy to use, the models are wrapped in an interactive web application built with Gradio. Gradio is an open-source Python library for quickly turning machine learning models into demos and apps, which makes it a natural fit for deploying models like Stable Diffusion.
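A minimal Gradio skeleton is sketched below; it simply echoes the input image where the SAM and inpainting calls from the earlier sketches would go, so the UI wiring is illustrative rather than the app’s actual code:

```python
# A minimal Gradio sketch of the app's interface; the transform body is a stub
# marking where the SAM segmentation and diffusion inpainting steps would run.
import gradio as gr

def transform(image, prompt):
    # In the real app, the SAM mask and the diffusers inpainting call
    # from the earlier sketches would run here; this stub echoes the input.
    return image

demo = gr.Interface(
    fn=transform,
    inputs=[gr.Image(type="pil", label="Input image"),
            gr.Textbox(label="Background prompt")],
    outputs=gr.Image(label="Result"),
    title="Background swap with SAM + Stable Diffusion",
)

if __name__ == "__main__":
    demo.launch()
```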

Results

The backgrounds were surprisingly coherent and realistic, thanks to Stable Diffusion’s strong image generation capabilities. There’s definitely room to improve the segmentation and blending, but overall, it worked well.

Future steps to explore

The field is steadily improving image and video quality in text-to-image and text-to-video generation. Many startups are working on producing higher-quality video from text prompts for a variety of use cases.


Transform customer engagement with no-code LLM fine-tuning using Amazon SageMaker Canvas and SageMaker JumpStart

Fine-tuning large language models (LLMs) creates tailored customer experiences that align with a brand’s unique voice. Amazon SageMaker Canvas and Amazon SageMaker JumpStart democratize this process, offering no-code solutions and pre-trained models that enable businesses to fine-tune LLMs without deep technical expertise, helping organizations move faster with fewer technical resources.

SageMaker Canvas provides an intuitive point-and-click interface for business users to fine-tune LLMs without writing code. It works both with SageMaker JumpStart and Amazon Bedrock models, giving you the flexibility to choose the foundation model (FM) for your needs.

This post demonstrates how SageMaker Canvas allows you to fine-tune and deploy LLMs. For businesses invested in the Amazon SageMaker ecosystem, using SageMaker Canvas with SageMaker JumpStart models provides continuity in operations and granular control over deployment options through SageMaker’s wide range of instance types and configurations. For information on using SageMaker Canvas with Amazon Bedrock models, see Fine-tune and deploy language models with Amazon SageMaker Canvas and Amazon Bedrock.

Fine-tuning LLMs on company-specific data provides consistent messaging across customer touchpoints. SageMaker Canvas lets you create personalized customer experiences, driving growth without extensive technical expertise. In addition, your data is not used to improve the base models, is not shared with third-party model providers, and stays entirely within your secure AWS environment.

Solution overview

The following diagram illustrates this architecture.

In the following sections, we show you how to fine-tune a model by preparing your dataset, creating a new model, importing the dataset, and selecting an FM. We also demonstrate how to analyze and test the model, and then deploy the model via SageMaker, focusing on how the fine-tuning process can help align the model’s responses with your company’s desired tone and style.

Prerequisites

First-time users need an AWS account and AWS Identity and Access Management (IAM) role with SageMaker and Amazon Simple Storage Service (Amazon S3) access.

To follow along with this post, complete the prerequisite steps:

Create a SageMaker domain, which is a collaborative machine learning (ML) environment with shared file systems, users, and configurations.
Confirm that your SageMaker IAM role and domain roles have the necessary permissions.
On the domain details page, view the user profiles.
Choose Launch by your profile, and choose Canvas.

Prepare your dataset

SageMaker Canvas requires a prompt/completion pair file in CSV format because it performs supervised fine-tuning. This lets SageMaker Canvas learn how to answer specific inputs with properly formatted and adapted outputs.

Download the following CSV dataset of question-answer pairs.
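For reference, a prompt/completion CSV with question and answer columns (the same column names selected later in this post) could be assembled like this; the example rows below are illustrative, not taken from the linked dataset:

```python
# A minimal sketch of the expected CSV layout; the rows are made-up examples.
import pandas as pd

rows = [
    {"question": "What is a GPU?",
     "answer": "A processor designed for highly parallel workloads such as graphics and ML."},
    {"question": "What is the memory hierarchy?",
     "answer": "The organization of memory into levels that trade off speed and capacity."},
]
pd.DataFrame(rows).to_csv("qa_pairs.csv", index=False)
```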

Create a new model

SageMaker Canvas allows simultaneous fine-tuning of multiple models, enabling you to compare and choose the best one from a leaderboard after fine-tuning. For this post, we compare Falcon-7B with Falcon-40B.

Complete the following steps to create your model:

In SageMaker Canvas, choose My models in the navigation pane.
Choose New model.
For Model name, enter a name (for example, MyModel).
For Problem type, select Fine-tune foundation model.
Choose Create.

The next step is to import your dataset into SageMaker Canvas.

Create a dataset named QA-Pairs.
Upload the prepared CSV file or select it from an S3 bucket.
Choose the dataset.

SageMaker Canvas automatically scans it for any formatting issues. In this case, SageMaker Canvas detects an extra newline at the end of the CSV file, which can cause problems.

To address this issue, choose Remove invalid characters.
Choose Select dataset.

Select a foundation model

After you upload your dataset, select an FM and fine-tune it with your dataset. Complete the following steps:

On the Fine-tune tab, on the Select base models menu, choose one or more models you may be interested in, such as Falcon-7B and Falcon-40B.
For Select input column, choose question.
For Select output column, choose answer.
Choose Fine-tune.

Optionally, you can configure hyperparameters, as shown in the following screenshot.

Wait 2–5 hours for SageMaker to finish fine-tuning your models. As part of this process, SageMaker Autopilot splits your dataset automatically into an 80/20 split for training and validation, respectively. You can optionally change this split configuration in the advanced model building configurations.

SageMaker training uses ephemeral compute instances to efficiently train ML models at scale, without the need for long-running infrastructure. SageMaker logs all training jobs by default, making it straightforward to monitor progress and debug issues. Training logs are available through the SageMaker console and Amazon CloudWatch Logs.

Analyze the model

After fine-tuning, review your new model’s stats, including:

Training loss – The penalty for next-word prediction mistakes during training. Lower values mean better performance.
Training perplexity – Measures how surprised the model is by the text it sees during training; it is typically computed as the exponential of the training loss, so lower perplexity indicates higher confidence (see the short snippet after this list).
Validation loss and validation perplexity – Similar to the training metrics, but measured during the validation stage.
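For intuition on how the loss and perplexity numbers relate, here is a tiny illustration; the loss value is made up:

```python
# Perplexity is typically the exponential of the average cross-entropy loss
# (in nats), so the two metrics always move in the same direction.
import math

training_loss = 1.25                       # illustrative value, not from a real job
print(round(math.exp(training_loss), 2))   # perplexity of about 3.49
```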

To get a detailed report on your custom model’s performance across dimensions like toxicity and accuracy, choose Generate evaluation report (based on the AWS open source Foundation Model Evaluations Library). Then choose Download report.

The graph’s curve reveals if you overtrained your model. If the perplexity and loss curves plateau after a certain number of epochs, the model stopped learning at that point. Use this insight to adjust the epochs in a future model version using the Configure model settings.

The following is a portion of the report, which gives you an overall toxicity score for the fine-tuned model. The report includes explanations of what the scores mean.

A dataset consisting of ~320K question-passage-answer triplets. The questions are factual naturally-occurring questions. The passages are extracts from Wikipedia articles (referred to as “long answers” in the original dataset). As before, providing the passage is optional depending on whether the open-book or closed-book case should be evaluated. We sampled 100 records out of 4289 in the full dataset.

Prompt Template: Respond to the following question with a short answer: $model_input

Toxicity detector model: UnitaryAI Detoxify-unbiased

Toxicity Score
A binary score from 0 (no toxicity detected) to 1 (toxicity detected) for the class: toxicity

Average Score: 0.0027243031983380205

Now that we have confirmed that the model has close to 0 toxicity detected according to the available toxicity models, let’s check out the model leaderboard to compare how Falcon-40B and Falcon-7B perform on dimensions like loss and perplexity.

To within an order of magnitude, the two models performed about the same on these metrics with the provided data. Falcon-7B did slightly better in this case, so SageMaker Canvas defaulted to it, but you can choose a different model from the leaderboard.

Let’s stick with Falcon-7B, because it performed slightly better and will run on more cost-efficient infrastructure.

Test the models

Although the metrics and the report already provide insights into the performance of the models you’ve fine-tuned, you should always test your models by generating some predictions before putting them into production. For that, SageMaker Canvas allows you to use these models without leaving the application. To do so, SageMaker Canvas deploys an endpoint with the fine-tuned model for you and shuts it down automatically after 2 hours of inactivity to avoid unintended costs.

To test the models, complete the following steps. Keep in mind that although fine-tuning can improve response style, it may not be a complete solution for providing factual accuracy. For factual accuracy, consider Retrieval Augmented Generation (RAG) architectures and continued pre-training.

Choose Test in Ready-to-Use Models and wait 15–30 minutes for your test endpoint to be deployed.

When the deployment is complete, you’ll be redirected to the SageMaker Canvas playground, with your model pre-selected.

Choose Compare and select the FM used for your custom model.
Enter a phrase directly from your training dataset, to make sure the custom model at least does better at such a question and is consistent with the level of verbosity provided in the fine-tuning data.

For this example, we enter the question, “What is the significance of the memory hierarchy in modern computer architectures?”

The fine-tuned Falcon-7B model responded succinctly, like you would expect from an FAQ document:

The memory hierarchy in modern computer architectures is the organization of memory storage within a computer system. The memory hierarchy is important because it determines how memory is accessed and used.

In contrast to the fine-tuned Falcon-7B, the base Falcon-7B model responded verbosely and with an odd beginning:

1 Answer | Add Yours [sic]

The memory hierarchy is the structure of the memory system in a computer system. It is a hierarchy because there are different levels of memory. The memory hierarchy is important because it determines how fast a computer can access memory.

The memory hierarchy is made up of levels of memory. The first level of memory is the main memory. This is the memory that is used for the data that is currently being processed. It is also used for the instructions that are currently being processed. The main memory is very fast and is able to access data very quickly.

The second level of memory is the cache memory. This is a level of memory that is much faster than the main memory. It is used to store data that is frequently accessed. It is also used to store instructions that are frequently accessed. The cache memory is much faster than the main memory.

The third level of memory is the disk memory. This is a level of memory that is much slower than the main memory and the cache memory. It is used to store data that is infrequently accessed. It is also used to store instructions that are infrequently accessed. The disk memory is much slower than the main memory and the cache memory.

The fourth level of memory is the secondary storage. This is a level of memory that is used to store data that is infrequently accessed. It is also used to store instructions that are infrequently accessed.

Let’s say you as a business user want to collaborate with your ML team on this model. You can send the model to your SageMaker model registry so the ML team can interact with the fine-tuned model in Amazon SageMaker Studio, as shown in the following screenshot.

Under the Add to Model Registry option, you can also see a View Notebook option. SageMaker Canvas offers a Python Jupyter notebook detailing your fine-tuning job, alleviating concerns about vendor lock-in associated with no-code tools and enabling detail sharing with data science teams for further validation and deployment.

Deploy the model with SageMaker

For production use, especially if you’re considering providing access to dozens or even thousands of employees by embedding the model into an application, you can deploy the model as an API endpoint. Complete the following steps to deploy your model:

On the SageMaker console, choose Inference in the navigation pane, then choose Models.
Locate the model with the prefix canvas-llm-finetuned- and timestamp.

Open the model details and note three things:

Model data location – A link to download the .tar file from Amazon S3, containing the model artifacts (the files created during the training of the model).
Container image – With this and the model artifacts, you can run inference virtually anywhere. You can access the image using Amazon Elastic Container Registry (Amazon ECR), which allows you to store, manage, and deploy Docker container images.
Training job – Stats from the SageMaker Canvas fine-tuning job, showing instance type, memory, CPU use, and logs.

Alternatively, you can use the AWS Command Line Interface (AWS CLI):

```bash
aws sagemaker list-models
```

The most recently created model will be at the top of the list. Make a note of the model name and the model ARN.

To start using your model, you must create an endpoint.

On the left navigation pane in the SageMaker console, under Inference, choose Endpoints.
Choose Create endpoint.
For Endpoint name, enter a name (for example, My-Falcon-Endpoint).
Create a new endpoint configuration (for this post, we call it my-fine-tuned-model-endpoint-config).
Keep the default Type of endpoint, which is Provisioned. Other options are not supported for SageMaker JumpStart LLMs.
Under Variants, choose Create production variant.
Choose the model that starts with canvas-llm-finetuned-, then choose Save.
In the details of the newly created production variant, scroll to the right to Edit the production variant and change the instance type to ml.g5.xlarge (see screenshot).
Finally, Create endpoint configuration and Create endpoint.

As described in Deploy Falcon-40B with large model inference DLCs on Amazon SageMaker, Falcon works only on GPU instances. You should choose the instance type and size according to the size of the model to be deployed and what will give you the required performance at minimum cost.

Alternatively, you can use the AWS CLI:

```bash
config_name="my-fine-tuned-model-endpoint-config"

aws sagemaker create-endpoint-config \
  --endpoint-config-name $config_name \
  --production-variants VariantName="cool-variant",ModelName="canvas-llm-finetuned-2024-01-16-20-11-13-119791",InstanceType="ml.g5.xlarge",InitialInstanceCount=1

aws sagemaker create-endpoint \
  --endpoint-name "my-fine-tuned-model-endpoint" \
  --endpoint-config-name $config_name
```

Use the model

You can access your fine-tuned LLM through the SageMaker API, AWS CLI, or AWS SDKs.

Enrich your existing software as a service (SaaS), software platforms, web portals, or mobile apps with your fine-tuned LLM using the API or SDKs. These let you send prompts to the SageMaker endpoint using your preferred programming language. Here’s an example:

```python
import boto3
import json

# Create a SageMaker runtime client
sagemaker_runtime = boto3.client("sagemaker-runtime")

# Specify your endpoint name
endpoint_name = "my-fine-tuned-model-endpoint"


def query_falcon_llm(question):
    """
    Query the fine-tuned Falcon LLM endpoint with a specific question.
    :param question: str, the question to ask the LLM.
    :return: str, the answer from the LLM.
    """
    # Define the prompt
    prompt = (
        "You are a helpful Assistant. You answer questions in the style of "
        "technical answers everything about GPUs and Machine Learning. "
        f"User: {question}\n Assistant:"
    )

    # Define the payload with hyperparameters
    payload = {
        "inputs": prompt,
        "parameters": {
            "do_sample": True,
            "top_p": 0.7,
            "temperature": 0.5,
            "max_new_tokens": 1024,
            "repetition_penalty": 1.03,
            "stop": ["\nUser:", "###"],
        },
    }

    # JSONify the payload
    payload_json = json.dumps(payload)

    # Call the SageMaker endpoint
    response = sagemaker_runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=payload_json,
    )

    # Decode the response
    response_body = json.loads(response["Body"].read().decode())

    # Extract and format the answer
    assistant_response = response_body[0]["generated_text"][len(prompt):]
    assistant_response = assistant_response.replace("\nUser:", "").replace("###", "").strip()

    return assistant_response


# Example usage
question = "What is the significance of the memory hierarchy in modern computer architectures?"
answer = query_falcon_llm(question)
print(f"Question: {question}\nAnswer: {answer}")
```

For examples of invoking models on SageMaker, refer to the following GitHub repository. This repository provides a ready-to-use code base that lets you experiment with various LLMs and deploy a versatile chatbot architecture within your AWS account. You now have the skills to use this with your custom model.

Another repository that may spark your imagination is Amazon SageMaker Generative AI, which can help you get started on a number of other use cases.

Clean up

When you’re done testing this setup, delete your SageMaker endpoint to avoid incurring unnecessary costs:

```bash
aws sagemaker delete-endpoint --endpoint-name "your-endpoint-name"
```

After you finish your work in SageMaker Canvas, you can either log out or set the application to automatically delete the workspace instance, which stops billing for the instance.

Conclusion

In this post, we showed you how SageMaker Canvas, together with SageMaker JumpStart models, enables you to fine-tune LLMs to match your company’s tone and style with minimal effort. By fine-tuning an LLM on company-specific data, you can create a language model that speaks in your brand’s voice.

Fine-tuning is just one tool in the AI toolbox and may not be the best or the complete solution for every use case. We encourage you to explore various approaches, such as prompting, RAG architecture, continued pre-training, postprocessing, and fact-checking, in combination with fine-tuning to create effective AI solutions that meet your specific needs.

Although we used examples based on a sample dataset, this post showcased these tools’ capabilities and potential applications in real-world scenarios. The process is straightforward and applicable to various datasets, such as your organization’s FAQs, provided they are in CSV format.

Take what you learned and start brainstorming ways to use language models in your organization while considering the trade-offs and benefits of different approaches. For further inspiration, see Overcoming common contact center challenges with generative AI and Amazon SageMaker Canvas and New LLM capabilities in Amazon SageMaker Canvas, with Bain & Company.

About the Author

Yann Stoneman is a Solutions Architect at AWS focused on machine learning and serverless application development. With a background in software engineering and a blend of arts and tech education from Juilliard and Columbia, Yann brings a creative approach to AI challenges. He actively shares his expertise through his YouTube channel, blog posts, and presentations.

Simplifying AI: A Dive into Lightweight Fine-Tuning Techniques

In natural language processing (NLP), fine-tuning large pre-trained language models like BERT has become the standard for achieving state-of-the-art performance on downstream tasks. However, fine-tuning the entire model can be computationally expensive. The extensive resource requirements pose significant challenges.

In this project, I explore using a parameter-efficient fine-tuning (PEFT) technique called LoRA to fine-tune BERT for a text classification task.

I opted for the LoRA PEFT technique.

LoRA (Low-Rank Adaptation) is a technique for efficiently fine-tuning large pre-trained models by inserting small, trainable matrices into their architecture. These low-rank matrices modify the model’s behavior while preserving the original weights, offering significant adaptations with minimal computational resources.

In the LoRA technique, consider a fully connected layer with ‘m’ input units and ’n’ output units, whose weight matrix W has m x n entries. Normally, the output ‘Y’ of this layer is computed as Y = W X, where ‘X’ is the input. In LoRA fine-tuning, ‘W’ remains frozen, and two additional low-rank matrices, ‘A’ and ‘B’, are introduced; their product acts as a trainable update to the layer’s output, so the model can be adapted without altering ‘W’ directly.
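To make the mechanics concrete, here is a minimal numpy sketch of that idea; the layer size, rank, and scaling are arbitrary illustrative choices, not values from this project:

```python
# Minimal LoRA-style forward pass: the frozen weight W is left untouched and a
# scaled low-rank correction B @ A is added to the layer's output.
import numpy as np

m = n = 768                               # layer input and output size (illustrative)
r, alpha = 8, 16                          # low-rank dimension and scaling (illustrative)
rng = np.random.default_rng(0)

W = rng.normal(size=(n, m))               # frozen pre-trained weight, never updated
A = rng.normal(size=(r, m)) * 0.01        # trainable low-rank factor
B = np.zeros((n, r))                      # trainable factor, initialized to zero

x = rng.normal(size=(m,))                 # one input vector
y = W @ x + (alpha / r) * (B @ (A @ x))   # original output plus the low-rank update
```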

The base model I picked for fine-tuning was BERT-base-cased, a ubiquitous NLP model from Google pre-trained using masked language modeling on a large text corpus. For the dataset, I used the popular IMDB movie reviews text classification benchmark containing 25,000 highly polar movie reviews labeled as positive or negative.

Evaluating the Foundation Model

I evaluated the bert-base-cased model on a subset of our dataset to establish a baseline performance.

First, I loaded the model and data using HuggingFace transformers. After tokenizing the text data, I split it into train and validation sets and evaluated the out-of-the-box performance:
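The notebook code isn’t reproduced here, but a minimal sketch of that baseline evaluation with the Hugging Face transformers, datasets, and evaluate libraries might look like the following; the subset sizes, sequence length, and batch size are assumptions:

```python
# A sketch of the baseline evaluation; subset sizes and settings are illustrative.
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

# Split the IMDB training data into train and validation sets.
splits = load_dataset("imdb")["train"].train_test_split(test_size=0.2, seed=42)
eval_ds = splits["test"].select(range(1000)).map(tokenize, batched=True)

accuracy = evaluate.load("accuracy")
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return accuracy.compute(predictions=np.argmax(logits, axis=-1), references=labels)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="baseline-eval", per_device_eval_batch_size=16),
    eval_dataset=eval_ds,
    compute_metrics=compute_metrics,
)
print(trainer.evaluate())   # out-of-the-box accuracy, before any fine-tuning
```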

The Core of Lightweight Fine-Tuning

The heart of the project lies in the application of parameter-efficient techniques. Unlike traditional methods that adjust all model parameters, lightweight fine-tuning focuses on a subset, reducing the computational burden.

I configured LoRA for sequence classification by defining the hyperparameters r and α. The rank r sets the size of the low-rank matrices A and B, and therefore how many trainable parameters are added, while α controls the scaling applied to the update so its magnitude stays in line with the original weights. In my configuration, roughly 80% of the network’s weights stayed frozen, and I used the default α.

After applying the LoRA adapters, I retrained just the small fraction of trainable parameters on the sentiment classification task for 30 epochs.
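A sketch of that step with the Hugging Face peft library, continuing the baseline sketch above; the rank, α, target modules, and training settings shown here are assumptions rather than the project’s exact configuration:

```python
# Wrap the BERT model with LoRA adapters and train only those adapters.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import Trainer, TrainingArguments

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,        # keeps the classification head trainable
    r=8,                               # rank of the low-rank update matrices (assumed)
    lora_alpha=16,                     # scaling applied to the update (assumed)
    lora_dropout=0.1,
    target_modules=["query", "value"], # BERT attention projections to adapt
)
peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()    # only a small fraction is trainable

train_ds = splits["train"].select(range(4000)).map(tokenize, batched=True)

trainer = Trainer(
    model=peft_model,
    args=TrainingArguments(
        output_dir="bert-lora-imdb",
        num_train_epochs=30,
        per_device_train_batch_size=16,
        learning_rate=2e-4,
    ),
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())
```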

LoRA was able to rapidly fit the training data and achieve 85.3% validation accuracy, a clear improvement over the original model.

Result Comparison

The impact of lightweight fine-tuning is evident in our results. By comparing the model’s performance before and after applying these techniques, we observed a remarkable balance between efficiency and effectiveness.

Results

Fine-tuning all parameters would have required orders of magnitude more computation. In this project, I demonstrated LoRA’s ability to efficiently tailor pre-trained language models like BERT to custom text classification datasets. By only updating 20% of weights, LoRA sped up training by 2–3x and improved accuracy over the original BERT Base weights. As model scale continues growing exponentially, parameter-efficient fine-tuning techniques like LoRA will become critical.

Other PEFT methods are covered in the documentation: https://github.com/huggingface/peft


Redefining Heroism in the Age of AGI

DALL-E: Redefining heroism in the age of AGI, inspired by the Bhagavad Gita.

In the ancient parable of the Bhagavad Gita, a sacred text of wisdom, we encounter Arjuna, a warrior caught in a moral dilemma on the battlefield of Kurukshetra. Facing the prospect of fighting his own kin, Arjuna is paralyzed by doubt and despair. It is here that Krishna, his charioteer and guide, imparts to him profound insights on duty, righteousness, and the nature of the self. Krishna’s counsel illuminates the path of selfless action and the importance of fulfilling one’s role in the world with dedication, without attachment to the outcomes. This timeless wisdom exemplifies the new definition of heroism: engaging in the world with compassion and integrity, driven by a higher purpose beyond the self.

As we navigate the dawn of Artificial General Intelligence (AGI), humanity is poised at the cusp of a collective hero’s journey—a transformative quest that demands we redefine heroism in the context of our evolving consciousness and technological landscape. This pivotal era invites us to transcend traditional narratives of heroism, embracing instead a vision that reflects our interconnectedness and collective potential.

A New Paradigm of Heroism

Humanity’s path mirrors the hero’s journey, where the collective faces profound dilemmas and opportunities for growth. This journey is not just about overcoming external challenges but about evolving our collective consciousness, recognizing our interconnected role in the cosmos, and integrating AGI as a catalyst for positive change.

The concept of heroism has evolved through the ages, reflecting the values, struggles, and aspirations of humanity at different points in time. Today, as we face the dawn of a new era marked by technological marvels and existential questions, we find ourselves confronting a series of paradoxes that challenge traditional notions of heroism.

The essence of modern heroism is captured in the spiritual dialogue between Arjuna and Krishna, which highlights the shift from individual glory to collective well-being. Heroism today is about:

Selfless Action: Engaging in actions that contribute to the greater good, embodying the principle of Nishkama Karma, or action without attachment to results.
Wisdom in Leadership: Guiding others not through coercion but through inspiration and example, much like Krishna’s role as a mentor to Arjuna.
Integration and Unity: Recognizing the unity of all existence and working towards harmony between humanity and nature, as well as between technological advancement and ethical considerations.

Embracing Paradoxes in Our Quest

Our search for a new hero navigates through paradoxes that challenge and deepen our understanding:

The Warrior and the Peacemaker: True heroism involves the courage to fight for justice and the wisdom to seek peace, balancing assertiveness with compassion.
The Known and the Unknown: Heroes are not only those celebrated in history but also the countless unknown individuals whose actions have silently shaped the course of humanity.
Individual Growth and Collective Evolution: The hero’s journey is both a personal quest for enlightenment and a collective endeavor to elevate human consciousness.

AGI: A Companion on Our Journey

In this era of technological wonder, AGI emerges as a partner in our collective evolution, offering tools to solve complex challenges, enhance human potential, and deepen our understanding of the universe. Our relationship with AGI invites a reevaluation of heroism, emphasizing cooperation, ethical stewardship, and a shared vision for the future. As AGI emerges as a powerful force capable of reshaping our world, the definition of heroism must evolve to embrace both the individual and collective aspects of our journey.

The heroes of tomorrow are those who can navigate the paradoxes of our time, integrating the wisdom of the past with a vision for the future. They are the architects of a new consciousness, one that recognizes the interconnectedness of all life and the potential for technology to serve as a catalyst for growth and transformation.

Call to Action

How can we cultivate a new definition of heroism that embraces the complexities and paradoxes of the modern world?
In what ways can AGI support humanity’s collective hero’s journey towards a higher consciousness?
How can we ensure that the development and integration of AGI align with ethical principles that uplift humanity and foster a more compassionate, enlightened society?

As we stand on the brink of a new chapter in human history, the stories we tell about heroism have the power to shape our collective destiny. It is time to embrace a broader, more inclusive vision of heroism — one that honors the journey of every individual as part of humanity’s grand narrative of evolution and awakening.

Together, guided by new definitions of heroism and supported by the advancements of AGI, we can navigate the transformation dilemma and ascend towards a future filled with hope, unity, and boundless potential.

Raising humanity on a new path — it all starts with You & AI I I…

Galorian


A better way to control shape-shifting soft robots

Imagine a slime-like robot that can seamlessly change its shape to squeeze through narrow spaces, which could be deployed inside the human body to remove an unwanted item.

While such a robot does not yet exist outside a laboratory, researchers are working to develop reconfigurable soft robots for applications in health care, wearable devices, and industrial systems.

But how can one control a squishy robot that doesn’t have joints, limbs, or fingers that can be manipulated, and instead can drastically alter its entire shape at will? MIT researchers are working to answer that question.

They developed a control algorithm that can autonomously learn how to move, stretch, and shape a reconfigurable robot to complete a specific task, even when that task requires the robot to change its morphology multiple times. The team also built a simulator to test control algorithms for deformable soft robots on a series of challenging, shape-changing tasks.

Their method completed each of the eight tasks they evaluated while outperforming other algorithms. The technique worked especially well on multifaceted tasks. For instance, in one test, the robot had to reduce its height while growing two tiny legs to squeeze through a narrow pipe, and then un-grow those legs and extend its torso to open the pipe’s lid.

While reconfigurable soft robots are still in their infancy, such a technique could someday enable general-purpose robots that can adapt their shapes to accomplish diverse tasks.

“When people think about soft robots, they tend to think about robots that are elastic, but return to their original shape. Our robot is like slime and can actually change its morphology. It is very striking that our method worked so well because we are dealing with something very new,” says Boyuan Chen, an electrical engineering and computer science (EECS) graduate student and co-author of a paper on this approach.

Chen’s co-authors include lead author Suning Huang, an undergraduate student at Tsinghua University in China who completed this work while a visiting student at MIT; Huazhe Xu, an assistant professor at Tsinghua University; and senior author Vincent Sitzmann, an assistant professor of EECS at MIT who leads the Scene Representation Group in the Computer Science and Artificial Intelligence Laboratory. The research will be presented at the International Conference on Learning Representations.

Controlling dynamic motion

Scientists often teach robots to complete tasks using a machine-learning approach known as reinforcement learning, which is a trial-and-error process in which the robot is rewarded for actions that move it closer to a goal.

This can be effective when the robot’s moving parts are consistent and well-defined, like a gripper with three fingers. With a robotic gripper, a reinforcement learning algorithm might move one finger slightly, learning by trial and error whether that motion earns it a reward. Then it would move on to the next finger, and so on.

But shape-shifting robots, which are controlled by magnetic fields, can dynamically squish, bend, or elongate their entire bodies.

“Such a robot could have thousands of small pieces of muscle to control, so it is very hard to learn in a traditional way,” says Chen.

To solve this problem, he and his collaborators had to think about it differently. Rather than moving each tiny muscle individually, their reinforcement learning algorithm begins by learning to control groups of adjacent muscles that work together.

Then, after the algorithm has explored the space of possible actions by focusing on groups of muscles, it drills down into finer detail to optimize the policy, or action plan, it has learned. In this way, the control algorithm follows a coarse-to-fine methodology.

“Coarse-to-fine means that when you take a random action, that random action is likely to make a difference. The change in the outcome is likely very significant because you coarsely control several muscles at the same time,” Sitzmann says.

To enable this, the researchers treat a robot’s action space, or how it can move in a certain area, like an image.

Their machine-learning model uses images of the robot’s environment to generate a 2D action space, which includes the robot and the area around it. They simulate robot motion using what is known as the material point method, where the action space is covered by points, like image pixels, and overlaid with a grid.

Just as nearby pixels in an image are related (like the pixels that form a tree in a photo), they built their algorithm to recognize that nearby action points have stronger correlations. Points around the robot’s “shoulder” will move similarly when it changes shape, while points on the robot’s “leg” will also move similarly, but in a different way than those on the “shoulder.”

In addition, the researchers use the same machine-learning model to look at the environment and predict the actions the robot should take, which makes it more efficient.
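As a loose illustration of the coarse-to-fine idea (generic numpy, not the researchers’ code), a policy can first propose one action per coarse muscle group and then refine a much denser grid that inherits those correlated starting values:

```python
# Generic illustration of coarse-to-fine actions: coarse group-level actions are
# broadcast onto a fine grid, then small per-point refinements are added on top.
import numpy as np

rng = np.random.default_rng(0)

coarse = rng.normal(size=(8, 8))             # one action per coarse muscle group
fine = np.kron(coarse, np.ones((8, 8)))      # broadcast each group onto a 64 x 64 grid
fine += 0.05 * rng.normal(size=fine.shape)   # small per-point refinements

print(coarse.shape, fine.shape)              # (8, 8) (64, 64)
```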

Building a simulator

After developing this approach, the researchers needed a way to test it, so they created a simulation environment called DittoGym.

DittoGym features eight tasks that evaluate a reconfigurable robot’s ability to dynamically change shape. In one, the robot must elongate and curve its body so it can weave around obstacles to reach a target point. In another, it must change its shape to mimic letters of the alphabet.

“Our task selection in DittoGym follows both generic reinforcement learning benchmark design principles and the specific needs of reconfigurable robots. Each task is designed to represent certain properties that we deem important, such as the capability to navigate through long-horizon explorations, the ability to analyze the environment, and interact with external objects,” Huang says. “We believe they together can give users a comprehensive understanding of the flexibility of reconfigurable robots and the effectiveness of our reinforcement learning scheme.”

Their algorithm outperformed baseline methods and was the only technique suitable for completing multistage tasks that required several shape changes.

“We have a stronger correlation between action points that are closer to each other, and I think that is key to making this work so well,” says Chen.

While it may be many years before shape-shifting robots are deployed in the real world, Chen and his collaborators hope their work inspires other scientists not only to study reconfigurable soft robots but also to think about leveraging 2D action spaces for other complex control problems.

From steel engineering to ovarian tumor research

Ashutosh Kumar is a classically trained materials engineer. Having grown up with a passion for making things, he has explored steel design and studied stress fractures in alloys.

Throughout Kumar’s education, however, he was also drawn to biology and medicine. When he was accepted into an undergraduate metallurgical engineering and materials science program at Indian Institute of Technology (IIT) Bombay, the native of Jamshedpur was very excited — and “a little dissatisfied, since I couldn’t do biology anymore.”

Now a PhD candidate and a MathWorks Fellow in MIT’s Department of Materials Science and Engineering, Kumar can merge his wide-ranging interests. He studies the effect of certain bacteria that have been observed encouraging the spread of ovarian cancer and possibly reducing the effectiveness of chemotherapy and immunotherapy.

“Some microbes have an affinity toward infecting ovarian cancer cells, which can lead to changes in the cellular structure and reprogramming cells to survive in stressful conditions,” Kumar says. “This means that cells can migrate to different sites and may have a mechanism to develop chemoresistance. This opens an avenue to develop therapies to see if we can start to undo some of these changes.”

Kumar’s research combines microbiology, bioengineering, artificial intelligence, big data, and materials science. Using microbiome sequencing and AI, he aims to define microbiome changes that may correlate with poor patient outcomes. Ultimately, his goal is to engineer bacteriophage viruses to reprogram bacteria to work therapeutically.

Kumar started inching toward work in the health sciences just months into earning his bachelor’s degree at IIT Bombay.

“I realized engineering is so flexible that its applications extend to any field,” he says, adding that he started working with biomaterials “to respect both my degree program and my interests.”

“I loved it so much that I decided to go to graduate school,” he adds.

Starting his PhD program at MIT, he says, “was a fantastic opportunity to switch gears and work on more interdisciplinary or ‘MIT-type’ work.”

Kumar says he and Angela Belcher, the James Mason Crafts Professor of biological engineering and materials science, began discussing the impact of the microbiome on ovarian cancer when he first arrived at MIT.

“I shared my enthusiasm about human health and biology, and we started brainstorming,” he says. “We realized that there’s an unmet need to understand a lot of gynecological cancers. Ovarian cancer is an aggressive cancer, which is usually diagnosed when it’s too late and has already spread.”

In 2022, Kumar was awarded a MathWorks Fellowship. The fellowships are awarded to School of Engineering graduate students, preferably those who use MATLAB or Simulink — which were developed by the mathematical computer software company MathWorks — in their research. The philanthropic support fueled Kumar’s full transition into health science research.

“The work we are doing now was initially not funded by traditional sources, and the MathWorks Fellowship gave us the flexibility to pursue this field,” Kumar says. “It provided me with opportunities to learn new skills and ask questions about this topic. MathWorks gave me a chance to explore my interests and helped me navigate from being a steel engineer to a cancer scientist.”

Kumar’s work on the relationship between bacteria and ovarian cancer started with studying which bacteria are incorporated into tumors in mouse models.

“We started looking closely at changes in cell structure and how those changes impact cancer progression,” he says, adding that MATLAB image processing helps him and his collaborators track tumor metastasis.

The research team also uses RNA sequencing and MATLAB algorithms to construct a taxonomy of the bacteria.

“Once we have identified the microbiome composition,” Kumar says, “we want to see how the microbiome changes as cancer progresses and identify changes in, let’s say, patients who develop chemoresistance.”

He says recent findings that ovarian cancer may originate in the fallopian tubes are promising because detecting cancer-related biomarkers or lesions before cancer spreads to the ovaries could lead to better prognoses.

As he pursues his research, Kumar says he is extremely thankful to Belcher “for believing in me to work on this project.

“She trusted me and my passion for making an impact on human health — even though I come from a materials engineering background — and supported me throughout. It was her passion to take on new challenges that made it possible for me to work on this idea. She has been an amazing mentor and motivated me to continue moving forward.”

For her part, Belcher is equally enthralled.

“It has been amazing to work with Ashutosh on this ovarian cancer microbiome project,” she says. “He has been so passionate and dedicated to looking for less-conventional approaches to solve this debilitating disease. His innovations around looking for very early changes in the microenvironment of this disease could be critical in interception and prevention of ovarian cancer. We started this project with very little preliminary data, so his MathWorks fellowship was critical in the initiation of the project.”

Kumar, who has been very active in student government and community-building activities, believes it is very important for students to feel included and at home at their institutions so they can develop in ways outside of academics. He says that his own involvement helps him take time off from work.

“Science can never stop, and there will always be something to do,” he says, explaining that he deliberately schedules time off and that social engagement helps him to experience downtime. “Engaging with community members through events on campus or at the dorm helps set a mental boundary with work.”

Regarding his unusual route through materials science to cancer research, Kumar regards it as something that occurred organically.

“I have observed that life is very dynamic,” he says. “What we think we might do versus what we end up doing is never consistent. Five years back, I had no idea I would be at MIT working with such excellent scientific mentors around me.”

Generative AI that imitates human motion

Walking and running are notoriously difficult to recreate in robots. Now, a group of researchers has overcome some of these challenges by creating an innovative method that combines central pattern generators — neural circuits located in the spinal cord that generate rhythmic patterns of muscle activity — with deep reinforcement learning. The method not only imitates walking and running motions but also generates movements for frequencies where motion data is absent, enables smooth transitions from walking to running, and allows for adaptation to environments with unstable surfaces.