
The Intersection of Chemistry and AI: The Story of Darnell Granberry

By Jas Mehta

In the ever-evolving landscape of science and technology, few stories are as compelling as that of Darnell Granberry, a machine-learning (ML) engineer at the New York Structural Biology Center. His journey from a passionate high school student to a pioneering figure at the intersection of chemistry and artificial intelligence (AI) exemplifies the transformative power of education, mentorship, and relentless curiosity.

Darnell’s fascination with chemistry began in middle school and blossomed during his high school years. His enthusiasm for the subject was evident when he excelled in Advanced Placement (AP) Chemistry as a sophomore. Reflecting on this period, he shared, “I wanted to take AP Chemistry my sophomore year instead of my junior year and was able to do that. I excelled in the class and loved it.” This early exposure to the intricacies of chemical reactions and molecular structures ignited a passion that would shape his future career.

Upon entering college, Darnell initially considered materials science but soon found his true calling in chemistry. His interest deepened after taking organic chemistry, a subject that captivated him with its exploration of reaction mechanisms and the fundamental beauty of chemical interactions. This pivotal experience led him to switch his major to chemistry, setting the stage for his future endeavors.

While pursuing his undergraduate degree, Darnell was introduced to the power of computer science through a course on computational structures. This course, which involved building a microprocessor from the ground up, opened his eyes to the immense potential of computational tools in solving complex problems. He was particularly struck by the efficiency and precision of computers in handling intricate calculations, a realization that would influence his future research.

Darnell’s academic journey at the Massachusetts Institute of Technology (MIT) provided a unique opportunity to merge his interests in chemistry and computer science. He took various computational science courses, including computational neuroscience and computational physics, which broadened his understanding of how computational techniques could be applied across different scientific disciplines.

One of the most significant milestones in Darnell’s career was his involvement in AI-driven research. He participated in an internship at the Memorial Sloan Kettering Cancer Center, where he worked on active learning and neural networks to mimic the decision-making processes of a team of scientists in drug discovery. This experience highlighted the potential of AI to revolutionize the field by improving decision-making and efficiency.

Darnell’s work in this area involved using machine learning to predict the properties of molecules and proteins. (He was featured in a previous story on the roles of deep learning in accelerating protein-folding prediction.) Despite the challenges, his efforts underscored the transformative potential of AI in accelerating drug discovery and developing new therapeutics. The integration of AI in chemistry, particularly through generative modeling and active learning, demonstrated how these technologies could address some of the most pressing challenges in medicine.

In discussing the advancements in his field, Darnell emphasized the exponential growth of computational power, particularly in the development of graphics processing units (GPUs) and supercomputers. He mentioned, “I think the increase in computing power has been the most important advancement. The development of GPUs and supercomputers has made the research move a lot faster.” This increased computational capacity has been instrumental in advancing AI research, making it possible to tackle more complex problems and achieve breakthroughs at an unprecedented pace.

Darnell’s story is a testament to the power of passion, mentorship, and the relentless pursuit of knowledge. His journey from a curious student to a leading figure at the intersection of chemistry and AI serves as an inspiration to future generations of scientists, demonstrating that with the right support and determination, the possibilities are limitless.

Get Involved

Contact the Midwest Big Data Innovation Hub if you’re aware of other people or projects we should profile here, or to participate in any of our community-led Priority Areas. The MBDH has a variety of ways to get involved with our community and activities. The Midwest Big Data Innovation Hub is an NSF-funded partnership of the University of Illinois at Urbana-Champaign, Indiana University, Iowa State University, the University of Michigan, the University of Minnesota, and the University of North Dakota, and is focused on developing collaborations in the 12-state Midwest region. Learn more about the national NSF Big Data Hubs community.

Deep Learning Engineer Ali Taghibakhshi and the Magic of Text-to-Image AI Generation

By Ken Ogata

Artificial intelligence (AI), a field where the boundaries of imagination are constantly being pushed, is witnessing remarkable advances in language and vision in ways that were unimaginable just a few years ago. At the forefront of this technological revolution is Ali Taghibakhshi, a Deep Learning Algorithm Engineer at NVIDIA, whose work epitomizes the blending of these realms.

Taghibakhshi works as a Deep Learning Algorithm Engineer at NVIDIA, which he describes as being a mixture of research and engineering centered around large-scale generative vision and language models. In the vocabulary of AI and machine learning, Taghibakhshi primarily works with large language models (LLMs) and multimodal Generative AI models. In other words, he helps create methods for machine-learning models to generate accurate and high-quality images based on a text input.

While text-to-image can be a harder concept to grasp than text-to-text, Taghibakhshi states that the machine-learning methods used for text-to-image models are not so different.

“The components are the same for both,” Taghibakhshi said. “They all use transformer architectures that have been revolutionizing the field since their introduction in 2017. Although [text-to-text and text-to-image] are different modalities, they still have a lot of things in common. Essentially, you’re combining these two modalities and they have to be in the same space.”
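To make the “same space” idea concrete, here is a minimal, CLIP-style sketch in which a text encoder and an image encoder project into one shared embedding space, where a dot product scores how well a caption matches an image. This is not NVIDIA’s code; every architecture choice, dimension, and name below is an illustrative assumption.

```python
# Minimal sketch: two encoders mapping different modalities into one shared space.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM = 256  # size of the shared embedding space (illustrative)

class TextEncoder(nn.Module):
    """Stand-in for a transformer text encoder; maps token IDs to one vector."""
    def __init__(self, vocab_size=10_000, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.proj = nn.Linear(hidden, EMBED_DIM)

    def forward(self, token_ids):                    # (batch, seq_len)
        pooled = self.embed(token_ids).mean(dim=1)   # average-pool over tokens
        return F.normalize(self.proj(pooled), dim=-1)

class ImageEncoder(nn.Module):
    """Stand-in for a vision backbone; maps an image to one vector."""
    def __init__(self, hidden=512):
        super().__init__()
        self.patchify = nn.Conv2d(3, hidden, kernel_size=8, stride=8)
        self.proj = nn.Linear(hidden, EMBED_DIM)

    def forward(self, images):                          # (batch, 3, H, W)
        feats = self.patchify(images).mean(dim=(2, 3))  # global average pooling
        return F.normalize(self.proj(feats), dim=-1)

# Because both encoders land in the same space, a dot product scores how well
# each caption matches each image -- the basic ingredient of text-to-image training.
text_vecs = TextEncoder()(torch.randint(0, 10_000, (4, 16)))  # 4 dummy captions
image_vecs = ImageEncoder()(torch.randn(4, 3, 64, 64))        # 4 dummy images
similarity = text_vecs @ image_vecs.T                          # (4, 4) score matrix
print(similarity.shape)
```

In a real system the two stand-ins would be large pretrained transformer and vision models, but the geometric idea is the same: once text and images live in one space, matching pairs can be pulled together during training.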

At NVIDIA, Taghibakhshi works on projects such as NeMo, a platform that allows individuals to develop custom generative AI models, ranging from language to vision and speech, starting from pretrained models. Taghibakhshi is currently working on methods for fine-tuning text-to-image diffusion models to ensure more accurate image generation. (For more information, Taghibakhshi summarizes his team’s research in this NVIDIA Developer blog.)

NeMo follows in the footsteps of previous image-generation models created by NVIDIA, most notably GauGAN, a model that let individuals draw simple blobs on a screen, which it would turn into a high-fidelity, picturesque landscape. The second version, GauGAN2, added a text-to-image feature, able to turn simple phrases such as “misty mountains covered in snow” or “sunset at rocky beach” into photorealistic images in real time. According to its creators, the model was named after the French post-impressionist painter Paul Gauguin.

Despite the exponential growth of AI and machine learning in recent years, there remains a great white whale that Taghibakhshi and other deep-learning engineers continue to pursue: getting AI to think outside the box.

“These models are good at interpolation. We provide all the data within a circle, and it learns that circle pretty well. However, [these models] can’t extrapolate. This isn’t limited to any certain models, but all machine-learning models in general,” Taghibakhshi said. “If you only train it on cat images, it’s never going to generate a horse or something like that.”

In January, Google DeepMind published a paper in Nature introducing AlphaGeometry, an AI model that can solve geometry problems at the level of an International Mathematical Olympiad gold medalist. While models such as these may seem to be thinking outside the box, Taghibakhshi explains that they are still far from it.

“It’s really impressive, but Mathematical Olympiad questions and their solutions are known, and it has been trained on thousands and thousands of problems. [AlphaGeometry] cannot solve unsolved problems in mathematics yet because again, they’re really good at interpolation and not extrapolation,” Taghibakhshi said.

The prospect of AI thinking outside the box and even surpassing human intelligence is what many call the “technological singularity”: a hypothetical point in the near future when technological growth becomes uncontrollable, whether to the benefit or detriment of civilization.

“Things are moving super fast. For example, I was reading a paper and we were trying to prove it, and then the next week, another paper with the same idea had already come out,” Taghibakhshi said. “The window is getting smaller and smaller for AIs to surpass human ability and we get the AGI that OpenAI is after.”

The “AGI” that Taghibakhshi mentions is short for artificial general intelligence, a type of AI that would perform cognitive tasks at a human level or better. It remains up for debate whether AGI could pose an existential threat to humanity.

“Not only is AI improving, but computing power is increasing every single day as well. So there’s a lot of things that promote each other,” Taghibakhshi said. “If you consider the videos that OpenAI’s Sora generated recently, versus the videos that were generated just one year ago, it’s amazing how different they are. Again, all these things are only five, six years old.”

While some AI researchers estimate that AGI could be achieved by 2050, there are many sectors of life that AI is influencing today, even in its current, interpolation-only form. One of the most controversial topics surrounding AI is its implications for art. While Taghibakhshi agrees that AI will have a significant effect on human artists, he doesn’t believe that artists will be replaced completely.

“I think [AI] will change the nature of how artists work. Maybe they [use AI] to narrow down to a certain style or ask it to redefine their work,” Taghibakhshi said. “I don’t think it will completely take away all artists. You don’t want a robot to start playing guitar for you.”

As we venture deeper into the terra incognita of the AI world, it remains up for debate whether the pursuit of AGI and a superintelligent machine-learning model will benefit humanity or sink all of us down with it. However, even after years of working with machine learning and mathematics, Ali Taghibakhshi’s sense of awe toward AI remains unclouded.

“Even though it’s stapled to the Earth and I know how these diffusion and language models work, it is still amazing. It doesn’t matter how much you understand these things. It’s still super magical to me.”


Using Deep Learning to Accelerate Protein-Folding Prediction

By Jas Mehta

For decades, a fundamental question in biology remained largely unanswered: how do proteins fold? Proteins are large, complex molecules that play crucial roles in virtually every biological process in our cells. These building blocks, the workhorses of the cell, contort their amino acid chains into intricate 3D shapes that dictate their function. Unveiling those structures has been a slow and expensive endeavor, hindering progress in medicine, drug discovery, and our understanding of life itself.

Researchers have grappled with the challenge of deciphering protein structures using methods such as X-ray crystallography and computational modeling, but these approaches often fell short in terms of accuracy and efficiency. Scientists and software developers are now using artificial intelligence (AI) to create powerful new tools to address the challenge. One example, called AlphaFold, was developed by DeepMind, a subsidiary of Google’s parent company, Alphabet. AlphaFold represents a paradigm shift in protein structure prediction: it builds on decades of research into the intricate puzzle of protein folding and leverages the power of deep learning to achieve near-atomic accuracy in predicting 3D protein structures from amino acid sequences. (See the image below for an example of how researchers are computing protein structure from amino acid sequence data.)

Computing protein structure from amino acid sequence


This breakthrough has streamlined the process, reducing prediction times from the months or even years required by traditional methods such as X-ray crystallography to mere minutes. It has also opened new avenues for drug discovery and biomedical research, accelerating research cycles, slashing costs, and promising to revolutionize our understanding of proteins and their functions within cells, with implications for fields ranging from medicine to materials science.

In the 14th Critical Assessment of Protein Structure Prediction (CASP14), a biennial competition, AlphaFold achieved a staggering feat: it matched or surpassed the accuracy of experimental methods for roughly 90% of proteins, showcasing the immense power of deep learning for this complex task. Historically, determining a protein structure could cost upwards of $100,000 and take months; AlphaFold slashes this time to minutes, with a projected cost per prediction of mere cents, translating to significant savings and faster research cycles.

Designing drugs often hinges on knowing a protein’s structure, and AlphaFold’s speed and accuracy streamline that process. A recent study used AlphaFold to identify a potential drug target for a baffling neurodegenerative disease, a process that would have taken significantly longer using traditional methods.

Moving beyond snapshots, the next frontier is understanding how proteins fold, move, and interact within the cell, which will provide invaluable insights into cellular processes and protein function. Deep learning thrives on data: integrating protein-interaction databases, cellular-environment data, and real-time folding kinetics will further enhance the accuracy and applicability of protein structure prediction.

Open-source platforms like AlphaFold are making these powerful tools accessible to researchers worldwide, fostering collaboration and accelerating scientific progress across disciplines.

The success of AlphaFold stands as a testament to the indispensable role played by the Protein Data Bank (PDB), a vast repository housing experimentally determined protein structures. Mr. Darnell Granberry, a distinguished machine-learning (ML) engineer at the New York Structural Biology Center, sheds light on the critical importance of open data in driving groundbreaking advancements in protein research. “The PDB contains nearly all of the protein structures that have been experimentally determined, and the fact that it’s open source is a major enabler of AlphaFold and other protein ML models,” remarks Mr. Granberry. “If we didn’t have it, I think we’d likely have been limited to in-house models developed at pharma/biologics companies on proprietary data.”

His insights offer a nuanced understanding of the symbiotic relationship between computational methods and protein research, emphasizing the transformative impact of accessible data on scientific innovation. Furthermore, Mr. Granberry eloquently articulates a foundational principle of biology, stating, “There’s that central dogma of biology: DNA to RNA, RNA to protein, protein to function. So basically, anything that you’re interested in, basically in any living thing, is going to be rooted in some sort of protein or complex of proteins, or collection of them that interact with each other.”

In his words, we discern a profound appreciation for the pivotal role played by proteins in shaping the essence of life itself, underscoring the fundamental importance of unraveling their structures and functions in driving progress across diverse realms of scientific inquiry.
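To ground the point about open structural data, here is a minimal sketch of pulling a single experimentally determined structure from the PDB and counting its residues. It assumes the Biopython library and uses an arbitrary small entry (1CRN, crambin) purely as an example; neither is mentioned in the article.

```python
# Minimal sketch: fetch one open PDB entry and tally residues per chain.
from Bio.PDB import PDBList, PDBParser

pdb_id = "1crn"  # crambin, a small example structure (illustrative choice)
path = PDBList().retrieve_pdb_file(pdb_id, pdir=".", file_format="pdb")

structure = PDBParser(QUIET=True).get_structure(pdb_id, path)

# Residue counts per chain: the kind of basic structural information that the
# PDB's open access makes freely available for training ML models.
for model in structure:
    for chain in model:
        residues = [r for r in chain if r.id[0] == " "]  # skip waters/heteroatoms
        print(f"Chain {chain.id}: {len(residues)} residues")
```

Models like AlphaFold learn from many thousands of such entries; the sketch simply shows how low the barrier to accessing that data is.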

In a recently published study, researchers used AlphaFold to predict the structure of a protein implicated in amyotrophic lateral sclerosis (ALS), a debilitating neurodegenerative disease. The predicted structure revealed a never-before-seen binding site, paving the way for the design of drugs that could potentially slow or halt disease progression. This exemplifies AlphaFold’s potential to revolutionize drug discovery, particularly for complex and previously untreatable diseases.


The chart above depicts the median accuracy of protein-folding predictions in the free-modeling category of the CASP competition over the years. As you can see, there was a significant jump in accuracy in 2018 and 2020, coinciding with the introduction of DeepMind’s AlphaFold systems. This dramatic improvement highlights the transformative power of deep learning in protein-folding prediction.

Deep learning has irrevocably transformed protein-folding prediction. As we delve deeper into protein dynamics and leverage the power of big data, the potential applications are truly boundless. From developing new medicines and biomaterials to a fundamental understanding of how life works at the molecular level, AlphaFold and its successors promise to usher in a new era of biological discovery.


Data Centers for AI and Quantum Computing

By Jas Mehta

In the rapidly evolving landscape of technology, data centers stand as the backbone of our interconnected world. As demands for computational power, storage, and connectivity continue to surge, the data center ecosystem is undergoing a profound transformation. This blog post explores the interplay of emerging trends, seamlessly integrating artificial intelligence (AI), Co-Packaged Optics (CPO), Compute Express Link (CXL), and other cutting-edge technologies that are reshaping the very fabric of data centers.

Artificial intelligence has emerged as a central force propelling the evolution of data centers. The insatiable appetite for AI applications, from machine learning to deep learning, necessitates a paradigm shift in computational capabilities. Data centers are rising to the challenge by incorporating specialized hardware, such as Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs), to accelerate AI workloads. This shift towards AI-centric infrastructure not only redefines the computational landscape but also sets the stage for unprecedented efficiency and capabilities within data centers.

Enter Co-Packaged Optics (CPO), a transformative technology that promises to elevate the performance and efficiency of data centers. Traditionally, optical transceivers existed as separate entities from processors, posing challenges in terms of power consumption, latency, and scalability. Co-Packaged Optics integrates these components directly into the processor package, minimizing signal losses and optimizing data transfer within the data center.

This integration not only enhances bandwidth and reduces latency but also addresses critical concerns surrounding space and energy efficiency. As data centers grapple with the escalating demand for higher data rates, CPO emerges as a game changer, streamlining connectivity for optimal performance.

Simultaneously, Compute Express Link (CXL) has garnered attention as an open industry standard facilitating high-speed, efficient connectivity between diverse devices within data centers. CXL seamlessly connects Central Processing Units (CPUs), GPUs, and other accelerators, fostering a heterogeneous computing environment. This versatility is indispensable for data centers navigating the diverse landscape of workloads, including the intensive requirements of AI and high-performance computing (HPC).

Compute Express Link’s impact extends beyond improving data coherency; it fundamentally enhances communication between processors, promising a holistic improvement in overall system performance. The adoption of this standard is gaining momentum, signaling a shift in the architectural paradigm of future data centers.

As we envision the future of data centers, it is essential to consider the broader spectrum of transformative technologies.

Quantum computing, though in its infancy, holds immense promise for solving certain classes of complex problems far faster than classical computers. As it matures, quantum computing could revolutionize data centers, offering unprecedented computational capabilities for those workloads.

The future of data centers is a dynamic convergence of groundbreaking technologies, where AI, CPO, CXL, and other emerging trends seamlessly intertwine. As the demand for computational power continues to soar, data centers must not only embrace but actively integrate these innovations. In doing so, they can ensure scalability, efficiency, and optimal performance in the face of evolving technological landscapes. The journey towards the next generation of data centers is an exciting one, marked by transformative technologies that pave the way for a more connected, intelligent, and sustainable future.


Toward Building Quality Relationships: How Chatbots Can Help Us Practice Self-Disclosure

By Qining Wang

Amid the turmoil of social events, from global pandemics to wars and social unrest, mental health has become a growing public concern.

According to the Anxiety and Depression Association of America (ADAA), anxiety disorders are the most common mental illness in the USA, affecting 40 million adults. Another common mental illness, depression, affects 16 million adults in the USA, according to statistics from the Centers for Disease Control and Prevention (CDC). Greater awareness and the gradual destigmatization of mental health issues have led more people to seek professional help to improve their overall mental well-being.

When working with mental health professionals, self-disclosure is vital to finding the roots and triggers of mental health issues. Self-disclosure is a process through which a person reveals personal or sensitive information to others. It is a crucial way to relieve stress, anxiety, and depression.

Meanwhile, self-disclosure is a skill that must be cultivated through practice: constant self-exploration and the courage to be vulnerable.

To investigate alternative ways of practicing self-disclosure, a research team at the University of Illinois at Urbana-Champaign (UIUC) explored chatbots and conversational AIs as potential mediators of the self-disclosure process in a 2020 study. The team leader, Dr. Yun Huang, is an assistant professor in the School of Information Sciences at UIUC and the co-director of the Social Computing Systems (SALT) Lab. The team is mainly interested in context-based social computing systems research.

Chatbots are ubiquitous in today’s online world. They are computer programs that interact with humans in a conversational back-and-forth. Some chatbots are task-oriented: a frequently-asked-questions (FAQ) chatbot, for example, recognizes the keywords a person types and returns a preset answer based on them. Other, more sophisticated chatbots, such as Apple’s Siri and Amazon’s Alexa, are data-driven: they are more contextually aware and can tailor their responses to user input. Contextual awareness and tailored responses are ideal qualities for designing an empathetic, tone-aware chatbot capable of self-disclosure.
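As a rough illustration of the first, task-oriented kind, here is a minimal keyword-matching FAQ bot. It is not the research team’s chatbot, and the keywords and replies are invented for the example.

```python
# Minimal sketch of a task-oriented FAQ chatbot: match a keyword, return a preset answer.
FAQ_RESPONSES = {
    "hours": "We are open Monday through Friday, 9 am to 5 pm.",
    "appointment": "You can book an appointment through our online portal.",
    "location": "We are in the student services building, room 120.",
}
FALLBACK = "I'm not sure about that. Could you rephrase your question?"

def faq_reply(message: str) -> str:
    """Return the preset answer for the first known keyword found in the message."""
    text = message.lower()
    for keyword, answer in FAQ_RESPONSES.items():
        if keyword in text:
            return answer
    return FALLBACK

print(faq_reply("What are your hours this week?"))   # matches "hours"
print(faq_reply("Tell me about self-disclosure."))   # no keyword -> fallback
```

Data-driven chatbots go much further, modeling conversational context rather than single keywords, which is what makes gradual, reciprocal self-disclosure possible.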

As such, Dr. Huang’s team built a self-disclosing chatbot that can engage in conversation more naturally and spontaneously. The chatbot would initiate self-disclosure during small-talk sessions. It would gradually move to more sensitive questions that encourage users to self-disclose.

To study how chatbots’ self-disclosure can affect humans’ willingness to self-disclose, the team recruited university students and divided them into three groups. Each group would interact with the chatbot at different levels of self-disclosure, from no self-disclosure to low and high levels of self-disclosure.

During the four-week study, the student participants would interact with the chatbot every day for 7–10 minutes. At the end of the third week, the chatbot would recommend that students interact with a human mental health specialist. The researchers would then evaluate students’ willingness to self-disclose to the professional.

The team found that the groups that self-disclosed to the chatbot reported greater trust in the mental health professional than the control group did. Participants in the control group felt “confused” when the chatbot brought up the human professional, whereas those in the experimental groups felt that they could listen to the chatbot and share sensitive experiences.

The team noted that, for participants interacting with the chatbot with the highest level of self-disclosure, trust in the mental health professional stemmed from trust in the chatbot itself. For the other two groups, participants’ trust was directed mainly toward the research team and the professionals behind the chatbot.

This study highlights how chatbots can be a great tool to help users practice self-disclosure, making them more comfortable seeking human professionals. It is worth noting that, regardless of how sophisticated chatbots can be, they are just mediators between users and mental health professionals.

At the end of the day, the most meaningful kind of self-disclosure can only be found through care, empathy, and understanding. Human to human.


Exploring Nature Through Imageomics with Professor Tanya Berger-Wolf

By Erica Joo and Qining Wang

We recently spoke with Professor Tanya Berger-Wolf, a pioneer who is leading a team to establish the new field of imageomics. She is a computational ecologist and the director and co-founder of the nonprofit organization Wild Me. Berger-Wolf is also the Director of the Translational Data Analytics Institute (TDAI) and a Professor of Computer Science and Engineering; Electrical and Computer Engineering; and Evolution, Ecology, and Organismal Biology at The Ohio State University.

Tanya Berger-Wolf

Observation is fundamental to any biological research. The development of optics technology, such as the inventions of the microscope and the telescope, allowed biologists to observe the world at different scales, from animals living in jungles spanning millions of acres to DNA inside cells just a few micrometers across.

However, as Prof. Berger-Wolf pointed out, those inventions only serve to “augment our ability to look” or “look at more things more carefully.” We are still making observations and searching for patterns with our own eyes, and therein lies the caveat: we are not good at finding patterns when things appear to be random, or when patterns are rare, sparse, subtle, or complex. We can’t answer, for example, whether the stripe patterns of mother zebras are similar to their babies’. To our eyes, the patterns appear too similar and too random at the same time, because human brains did not evolve to “take [the stripe patterns] holistically and quantify them in any meaningful way.”

And that’s where imageomics comes in. Imageomics follows in the footsteps of genomics, a field in which researchers study the biology of an organism or species through its genetic information. In a similar vein, imageomics aims to understand nature through biological information extracted from images.

Computers are the perfect information extractors, because they “perceive” the world differently. Computers can quantify images down to pixels and find patterns that humans do not, or cannot, comprehend. Berger-Wolf pointed out that imageomics, as a “whole new field of science,” allows scientists to answer biological questions that weren’t answerable before because it provides scientists with a new way of observing nature.

The complementary vision of computers is especially prominent in the studies of biological traits, according to Berger-Wolf. Biological traits are the interplay between genes and the environment. They can be physical characteristics such as “beak colors, stripe patterns, fin curvatures, the curves of the belly or the back.” They can also be behavioral characteristics such as possums playing dead or pollen feeding in birds. Being able to observe traits “is the foundation of our understanding of how these traits are inherited and the understanding of genetics,” insights into animal behavior, and ecological and evolutionary theories.

In order for biologists to propose new evolutionary hypotheses to explain biological traits, it is crucial to “make these traits computable.” Starting from a project funded by the National Science Foundation, Berger-Wolf founded Wild Me. This nonprofit organization has an ongoing initiative, Wildbook, that collects images containing animals from numerous sources, including camera traps, drones, and even tourists’ social media posts on YouTube, Instagram, and Flickr.

Those source images serve as a starting point for a branch of research in imageomics that will allow researchers to develop open software and artificial intelligence for the research community. Those tools would let biologists discern biological traits that are too similar or too subtle for their eyes, such as animal coat patterns or species that look alike yet are genomically different. Computer vision would allow scientists to find out whether traits are heritable or shared by multiple species. Based on those new insights, biologists could then formulate new evolutionary hypotheses and start asking even more interesting questions, to which only imageomics can provide the answers.

Berger-Wolf jokes that she has “multiple research personality,” with a passion for bringing her diverse backgrounds together. Helping to found the new Imageomics Institute allowed those interests to converge. Participating in both worlds—natural and technical—allows her to see “the better way” of working and to increase effectiveness.

She commented that starting conversations between fields increases “mutual respect and understanding of each other’s questions and where we can come together.” Berger-Wolf sums up her career by describing her work as “creating tools that expand our ability to look at more things more carefully and even be able to ask questions that people have never been able to ask before.”

Berger-Wolf is currently working on several projects. One looks at animal coat patterns and correlates them with genetics and heritability, probing why some traits are heritable and others are not; imageomics enables a deeper level of understanding here, since humans cannot attend to every detail. In another project, she is working on species-level traits of butterflies that mimic other species. Computer algorithms can identify what is similar and different in their appearances, down to the smallest details. Computers can extract complex information, and people can start asking different questions using information normally beyond the scope of human perception.

Berger-Wolf’s recent award for the new Imageomics Institute under the NSF Harnessing the Data Revolution program is extending this work and bringing it to a wider audience. The images to be used as sources come from existing research projects, citizen scientists, organizations like iNaturalist, eBird, and Wild Me, as well as the digitization of the natural history museum collections through the iDigBio project.

There are various opportunities for students at any level and researchers from all over the world to participate in the field of imageomics. Berger-Wolf emphasized that the goal is to have people understand what imageomics is and how it’s significant so that it can be accessible to all.

“It’s not just an opportunity to advance science, but also to engage people in science,” she explains. Her team is made up of many researchers and students who share the goal of building a community around imageomics. Direct community engagement, outreach events, and conferences are great ways to inform people about imageomics and how it can change the way we see traits.

“We have incredible privilege to do science. To spend time answering scientific questions that are interesting to us while the public is paying us to do so. It’s important to tell the science to the public, communicate why, and what science brings to the world.”

Get Involved

New community-building activities facilitated by the Midwest Big Data Innovation Hub are continuing throughout 2022. Contact the Hub if you’re interested in participating, or are aware of other people or projects we should profile here. The MBDH has a variety of ways to get involved with our community and activities.


How Do Scientists Help AI Cope with a Messy Physical World?

By Qining Wang

When we see a stop sign at an intersection, we won’t mistake it for a yield sign. Our eyes recognize the white “STOP” letters printed on the red hexagon. It doesn’t matter if the sign is under sunlight or streetlight. It doesn’t matter if a tree branch gets in the way or someone puts graffiti and stickers on the sign. In other words, our eyes can perceive objects under different physical conditions.

A stop sign. Photo by Anwaar Ali via Unsplash.

However, identifying road signs accurately is very different, if not more difficult, for artificial intelligence (AI). Even though, in Alan Turing’s framing, AIs are systems that can “think like humans,” they can still fall short of mimicking the human mind, depending on how they acquire their intelligence.

One potential hurdle is correctly interpreting variations in the physical environment. Inputs that exploit this limitation are commonly referred to as “adversarial examples.”

What Are Adversarial Examples?

Currently, the most common method to train an AI application is machine learning, a type of AI process that helps AI systems learn and improve from experience. Machine learning is like the driving class an AI needs to take before it can hit the road. Yet machine-learning-trained AIs are not immune to adversarial examples.

Circling back to reading the stop sign, an adversarial example could be the stop sign turning into a slightly darker shade of red at night. The machine-learning model captures these tiny color differences that human eyes cannot discern and might interpret the signs as something else. Another adversarial example could be a spam detector that fails to filter a spam email formatted like a normal email.

Just as individual human minds can be unpredictable, it is difficult to pinpoint exactly why a machine-learning model makes the predictions it does. Nor is it a simple task to develop a machine-learning model that comprehends the messiness of the physical world. To improve the safety of self-driving cars and the quality of spam filters, data scientists are continuously tackling the vulnerabilities in the machine-learning processes that help AI applications “see” and “read” better.
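To make the idea of an imperceptible perturbation concrete, here is a minimal sketch using the fast gradient sign method (FGSM), a standard textbook technique; the article does not name any specific attack, and the toy model and numbers here are purely illustrative.

```python
# Minimal FGSM-style sketch: a tiny, targeted nudge to every pixel can change a prediction.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy 10-class classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in "stop sign" image
true_label = torch.tensor([0])                         # pretend class 0 = "stop sign"

# Gradient of the loss with respect to the input pixels...
loss = loss_fn(model(image), true_label)
loss.backward()

# ...then nudge every pixel a tiny step in the direction that increases the loss.
epsilon = 2 / 255                                      # far below what human eyes notice
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

# Against a trained classifier, a perturbation this small is often enough to flip the label.
print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```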

What Are Humans Doing to Correct AI’s Mistakes?

To defend against adversarial examples, the most straightforward mechanism is to let machine-learning models analyze existing adversarial examples. For example, to help the AI of a self-driving car recognize stop signs under different physical circumstances, we could expose the machine-learning model that controls the AI to pictures of stop signs under different lightings or at various distances and angles, as in the sketch below.
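One common way to generate that variety is data augmentation. The sketch below assumes torchvision as the tooling; the specific transforms and values are illustrative, not taken from any system described in the article.

```python
# Minimal sketch: augment training images so the model sees the "same" sign many ways.
from torchvision import transforms

augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.5, contrast=0.3),  # sunlight vs. streetlight
    transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),   # nearer or farther away
    transforms.RandomRotation(degrees=15),                  # tilted viewing angles
    transforms.RandomPerspective(distortion_scale=0.3),     # off-center camera positions
    transforms.ToTensor(),
])

# During training, each PIL image of a stop sign would pass through `augment`,
# so the model sees a slightly different version of the sign on every pass:
# augmented = augment(stop_sign_image)   # stop_sign_image: a PIL.Image
```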

Google’s reCAPTCHA service is an example of such a defense. As an online safety measure, users need to click on images of traffic lights or road signs from a selection of pictures to prove that they are humans. What users might not be aware of is that they are also teaching the machine-learning model what different objects look like under different circumstances at the same time.

Alternatively, data scientists can improve AI models by exposing them to simulated adversarial examples during training. One way to do this is to implement a Generative Adversarial Network (GAN).

GANs consist of two components: a generator and a discriminator. The generator “translates” a “real” input image from the training set (clean example) into an almost indistinguishable “fake” output image (adversarial example) by introducing random variations to the image. This “fake” image is then fed to the discriminator, where the discriminator tries to tell the modified and unmodified images apart.

The generator and the discriminator are inherently in competition: The generator strives to “fool” the discriminator, while the discriminator attempts to see through all its tricks. This cycle of fooling and being fooled repeats. Both become better at their own designated tasks over time. The cycle continues until the generator outcompetes the discriminator, creating adversarial examples that are indistinguishable to the discriminator. In the end, the generator is kept to defend against different types of real-life adversarial attacks.
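Here is a minimal sketch of that generator/discriminator tug-of-war, written to mirror the description above: the generator adds a small learned perturbation to a clean image, and the discriminator tries to tell clean and modified images apart. The architectures, sizes, and training settings are illustrative assumptions, not taken from a real system.

```python
# Minimal GAN-style sketch: a generator perturbs clean images; a discriminator
# tries to tell clean from modified. Each pushes the other to improve.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Turns a clean image into a subtly modified (adversarial) one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, clean):
        perturbation = 0.05 * self.net(clean)    # keep the change small
        return (clean + perturbation).clamp(0, 1)

class Discriminator(nn.Module):
    """Scores how likely an image is to be clean (1) rather than modified (0)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(16 * 16 * 16, 1),
        )

    def forward(self, image):
        return self.net(image)

gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

clean = torch.rand(8, 3, 32, 32)   # a toy batch standing in for real training images

for step in range(100):            # toy training loop
    fake = gen(clean)

    # Discriminator: label clean images 1 and generator outputs 0.
    d_loss = bce(disc(clean), torch.ones(8, 1)) + bce(disc(fake.detach()), torch.zeros(8, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to get its modified images labeled as clean (1).
    g_loss = bce(disc(fake), torch.ones(8, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In this toy loop the images are random noise, but the competitive structure is the point: as the discriminator gets harder to fool, the generator’s modifications become harder to detect, which is exactly the dynamic described above.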

AI Risks and Responses

GANs can be valuable tools to tackle adversarial examples in machine learning, but they can also serve malicious purposes. For instance, one other common application of GANs is face generation. This so-called “deepfake” makes it virtually impossible for humans to tell a real face from a GAN-generated face. Deepfakes could result in devastating consequences, such as corporate scams, social media manipulation, identity theft, or disinformation attacks, to name a few.

This shows how, as our physical lives become more and more entangled with our digital presence, we can never neglect the other side of the coin while enjoying the benefits brought to us by technological breakthroughs. Understanding both would serve as a starting point for practicing responsible AI principles and creating policies that enforce data ethics.

Tackling vulnerabilities in machine learning matters, and so does protecting ourselves and the community from the damage that those technologies could cause.

Learn More and Get Involved

Curious whether you can tell a real human face from a GAN-generated face? Check out this website. And keep an eye out for the Smart & Resilient Communities priority area of MBDH, if you wish to learn more about how data scientists use novel data science research to benefit communities in the Midwest. There are also several NSF-funded AI Institutes in the Midwest that are engaged in related research and education.

Contact the Midwest Big Data Innovation Hub if you’re aware of other people or projects we should profile here, or to participate in any of our community-led Priority Areas. The MBDH has a variety of ways to get involved with our community and activities.

The Midwest Big Data Innovation Hub is an NSF-funded partnership of the University of Illinois at Urbana-Champaign, Indiana University, Iowa State University, the University of Michigan, the University of Minnesota, and the University of North Dakota, and is focused on developing collaborations in the 12-state Midwest region. Learn more about the national NSF Big Data Hubs community.