Generative AI Prompts New Areas of University Research


Taiwan’s leading global position in the semiconductor industry – producing 60% of the world’s chips and 90% of the most advanced ones – helps to attract artificial intelligence (AI) talent and research investment, with strong university-industry links acting as a prominent driver of AI research.

Now the emergence of generative AI has led to a mushrooming of research projects to resolve problems stemming from the application of generative AI technologies.

Hsuan-Tien Lin, professor in the department of computer science and information engineering at National Taiwan University (NTU), the island’s leading university for AI research, said the emergence of ChatGPT – which is able to write human-like text – and other generative AI tools means that there are new problems to identify and resolve. This stimulates AI research.

Lin told University World News that his lab at NTU “has lots of generative AI projects. For some projects generative AI works really well, but often we need to fix the parts that generative AI cannot do well”.

“We need to understand, for example, whether it [the problem] is a technology limitation and therefore we need to combine it with other technologies, or [whether] it is a problem that may be rooted in the data. Then we need to think about the data collection strategy and identify the root causes.”

But research involving generative AI is about more than fixing ‘bugs’ – problems in the way generative AI responds to user commands or questions – in order to improve accuracy.

Lin argues that this type of research is completely different from traditional scientific or computing research. “Generative AI needs us to define a new process of how we do research,” he said. Lin added that standard approaches such as instrumentation and model design are no longer enough.

“With generative AI, a careful evaluation process in research is much more important because the results from generative AI are not always reproducible, so how can we convince ourselves that we are getting true research results rather than ‘hallucinations’?” he said, referring to content that is simply wrong or made up by generative AI tools.

“In the lab we try to ensure that we’re not just testing a few cases repeatedly,” Lin explained.

“So for example, we define the minimum topics or questions we need to ask generative AI in order to get some convincing results. We define, for example, the number of times we ask the same question to generative AI in order to understand the variations.

“We’re trying to make this part of our analysis steps, instead of just checking three cases and being super-happy that generative AI answered our question.”
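The evaluation loop Lin describes – fixing a minimum set of questions and asking each one repeatedly to gauge variation – can be illustrated with a short sketch. The code below is a hypothetical outline only, not NTU’s actual protocol: `ask_model` is a stand-in for whatever generative AI API a lab uses, and the simulated answers merely mimic non-deterministic output.

```python
"""Minimal sketch of the evaluation loop Lin describes: ask the same
question several times and quantify how much the answers vary.
The model call is a hypothetical stand-in, not a real API."""
import random
from collections import Counter


def ask_model(question: str) -> str:
    # Hypothetical placeholder for a generative AI API call; it simply
    # simulates non-deterministic output so the sketch runs end to end.
    return random.choice(["Answer A", "Answer A", "Answer B"])


def answer_variation(question: str, n_repeats: int = 10) -> Counter:
    """Ask the same question n_repeats times and count distinct answers
    (after light normalisation) to gauge reproducibility."""
    answers = [ask_model(question).strip().lower() for _ in range(n_repeats)]
    return Counter(answers)


# A predefined question set stands in for the "minimum topics or
# questions" the lab defines before drawing any conclusions.
questions = ["What does this sensor reading indicate?",
             "Summarise the main finding of the paper."]

for q in questions:
    counts = answer_variation(q)
    agreement = counts.most_common(1)[0][1] / sum(counts.values())
    print(f"{q!r}: {len(counts)} distinct answers, "
          f"top-answer agreement {agreement:.0%}")
```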

‘Rough’ use of machine learning tools

New methods of research verification have become important – for example, in the use of ChatGPT for information search and retrieval, which is of particular importance to researchers but is also used widely, often daily, by the general public.

“Information retrieval has been considered to be ‘upgraded’ by machine learning and deep learning techniques. But many uses of machine learning in applications may not be the best use or may not be the right use,” Lin noted.

At an international conference on information retrieval held at NTU last July, an important theme was the impact of generative AI and other machine learning techniques on extracting information from the internet.

Lin’s colleague Chih-Jen Lin, a distinguished professor in NTU’s department of computer science, said in a keynote at the conference that some people take a rough machine learning technology and directly apply it to an application without thinking too deeply. “That can cause problems if we don’t conduct rigorous research,” he told the conference.

He noted that when introducing machine learning to information retrieval or other fields, “we need to be careful in understanding what the technique is about in order to ensure the best use”. Instead, many researchers rush to use machine learning without a comprehensive understanding of its techniques and limitations.

They may not think clearly about training, validation and test sets and so “end up with a rough instead of a rigorous use of machine learning methods,” Chih-Jen Lin said. “The phenomenon of rough use of machine learning methods is common and sometimes unavoidable.”
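Chih-Jen Lin’s point about training, validation and test sets comes down to a discipline that is easy to state and easy to skip: tune only on validation data, and look at the held-out test set once. The sketch below is a minimal illustration of that discipline using scikit-learn; the dataset, model and hyper-parameter grid are purely illustrative, not anything he presented.

```python
"""Minimal sketch of train/validation/test discipline: select hyper-
parameters on the validation set, report the test score exactly once."""
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)

# Hold out a test set first and never touch it while tuning.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# Split the remainder into training and validation sets for model selection.
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.25, random_state=0, stratify=y_trainval)

best_C, best_val = None, -1.0
for C in (0.01, 0.1, 1.0, 10.0):          # illustrative hyper-parameter grid
    model = LogisticRegression(C=C, max_iter=2000).fit(X_train, y_train)
    val_acc = model.score(X_val, y_val)   # choose on validation data only
    if val_acc > best_val:
        best_C, best_val = C, val_acc

# Refit on train+validation, then report the held-out test accuracy once.
final = LogisticRegression(C=best_C, max_iter=2000).fit(X_trainval, y_trainval)
print(f"chosen C={best_C}, test accuracy={final.score(X_test, y_test):.3f}")
```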

AI moves quickly from research labs to real-world applications. But machine learning models that work well in the lab can fail in real-world use, with serious consequences.

Replication by different researchers in different settings would expose problems sooner, making AI stronger for everyone, he said. Also, improving the teaching of machine learning would increase rigour and improve the practical use of generative AI tools and techniques.

Information retrieval and LLMs

Information retrieval – in particular the shift from internet search engines, which select and rank relevant results from a large corpus of documents and primary sources, towards summarising and fusing information from multiple documents into a single ChatGPT-generated answer to a user’s question – raises multiple problems that researchers are seeking to resolve.

Marc Najork, a distinguished research scientist at Google DeepMind, in a keynote speech at the conference, said that “semantic retrieval is a vibrant research area with lots of open research problems”.

Generating a single answer using ChatGPT was not necessarily what users wanted, he added, pointing to a continued need for results that are transparent and refer to primary sources such as academic papers or newspaper articles.

Lack of transparency prevents AI large language models (LLMs) used to train chatbots, and other AI techniques and tools, from being properly assessed for robustness, bias and safety, he added.

However, “there has been great progress on grounding answers and attributing them to source documents. Benchmarks are emerging to measure progress on these fronts.”
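The grounding-and-attribution idea Najork describes can be sketched in skeletal form: retrieve candidate source documents first, then constrain the generated answer to cite those numbered sources. The example below is a hypothetical illustration – a tiny TF-IDF retriever over a toy corpus, with the generation step left as a prompt rather than a real model call – and not a description of any production system.

```python
"""Minimal sketch of grounding an answer in retrieved, numbered sources
so each claim can be attributed. The corpus and prompt are illustrative."""
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus standing in for primary sources (papers, news articles).
corpus = {
    "doc1": "Taiwan produces a majority of the world's advanced semiconductors.",
    "doc2": "Large language models can hallucinate unsupported statements.",
    "doc3": "Retrieval-augmented generation grounds answers in retrieved text.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the k documents most similar to the query (TF-IDF cosine)."""
    ids, texts = zip(*corpus.items())
    vec = TfidfVectorizer().fit(texts + (query,))
    sims = cosine_similarity(vec.transform([query]), vec.transform(texts))[0]
    ranked = sorted(zip(ids, texts, sims), key=lambda t: t[2], reverse=True)
    return [(doc_id, text) for doc_id, text, _ in ranked[:k]]

def build_grounded_prompt(query: str) -> str:
    """Number the retrieved sources so the answer can cite them as [1], [2]."""
    sources = retrieve(query)
    numbered = "\n".join(f"[{i + 1}] ({doc_id}) {text}"
                         for i, (doc_id, text) in enumerate(sources))
    return (f"Answer the question using only the sources below and cite "
            f"them by number.\n\nSources:\n{numbered}\n\nQuestion: {query}")

print(build_grounded_prompt("Why do generated answers need source attribution?"))
```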

Importance of research to chip industry

Close proximity to the semiconductor industry, which has huge computing power compared to universities, is key to much of the cutting-edge research being carried out in Taiwan, including improvements in generative AI reliability.

“Semiconductor companies in Taiwan are very willing to invest in AI research and talent,” noted Lin. “They are not just hiring people who are doing semiconductor design and manufacturing, but they’re hiring AI people to upgrade their manufacturing processes, to think about add-on applications on top of their chips, and other things.”

The importance of university research to keep Taiwan ahead in the global race was demonstrated last year when Jensen Huang, CEO and co-founder of California-based NVIDIA – which developed the high-end chips that power most AI and generative AI applications – announced a new AI centre at NTU for AI research and innovation.

Huang, who moved from Taiwan to the United States as a child, is feted in Taiwan as one of its own.

Though not involved in establishing the centre, Lin said: “It is likely to use the very strong computing power that can be provided by NVIDIA to boost many of the important research directions at NTU.” These include, for example, NTU’s strengths in speech recognition and speech technologies, which have emerged as an important AI application.

But the NTU AI centre is also an acknowledgement of Asia as a testbed for high-end technologies that complements Silicon Valley in California.

“On machine learning, we have a pretty strong community across Asia, from Japan, Korea, Singapore, from mainland China,” said Lin, noting that this can foster collaboration in the region.

Many AI companies begin with their markets in Asia and have sales sites across many countries in the region, he noted: “So we’re solving lots of the common problems faced in Asia in marketing AI.”

