"It could be done but there could be misleading information," said Sercan Ozcan, Reader in Innovation and Technology Management at the University of Portsmouth in the United Kingdom. "ChatGPT is not 100% accurate and it is prone to 'hallucination.'"

Ozcan said he's not sure ChatGPT would be valuable if there is no prior volume of work for it to analyze and emulate. "I believe humans can still do better work than ChatGPT, even if it is slower," he said. His advice is to not use ChatGPT "in areas where we cannot accept any error."

Humans in the loop

Steve Ruff, associate research professor at Arizona State University's School of Earth and Space Exploration in Tempe, Arizona, is keenly tied to studying Mars.

"My immediate reaction is that it's highly unlikely that 'on-the-spot' manuscripts would be a realistic scenario, given how the process involves debates among the team over the observations and their interpretation," Ruff said. "I'm skeptical that any AI, trained on existing observations, could be used to confidently interpret new observations without humans in the loop, especially with new instrument datasets that have not been available previously. Every such dataset requires painstaking efforts to sort out."

For the near term, Ruff thinks AI could be used for rover operations, like picking targets to observe without humans in the loop, and for navigation.

NASA's Ingenuity Mars helicopter, photographed by the agency's Perseverance rover on April 16, 2023. Ingenuity has been used as a 'scout' to help identify locations for Perseverance to study. (Image credit: NASA/JPL-Caltech/ASU/MSSS)

First things first

Perhaps that is the strongest question, said Nathalie Cabrol, Director of the Carl Sagan Center for Research at the SETI Institute in Mountain View, California.

"AI is a formidable tool and should be used as such to support humans in their activity. We actually do that already every day, in one form or another," she added, "and improved versions might make things better."

On the other hand, like all human tools, they are double-edged swords and sometimes lead people to start thinking "nonsense," Cabrol added, and she believes that to be the case here.

"It is a great time where I see my work coming to fruition and can put my ideas together on paper," Cabrol said; she sees that as an important part of her creative process.

"But let's assume for a moment that I let this algorithm write it for me. Then, I am being told that it's okay because the paper will be reviewed," Cabrol said. "But by whom? I would assume that if you let algorithms do the job for you, it's because you assume they will be less biased and do a better job? Following that logic, I would assume that a human is not qualified to review that paper."

Specters of "transhumanism"

Cabrol senses that the next question is: Where do we stop? What if all researchers ask AI to write their research grant proposals? What if they do and don't tell?

"This depends in which world you want to live and what part you want left to humanity," Cabrol said. "We are creative beings and we are not perfect," she continued, "but we learn from our mistakes and that's part of our evolution. Mistake and learning are other words for 'adaptation'," she said.

By letting AI get into what makes us human, we are messing with our own evolution, Cabrol added, and she sees specters of "transhumanism" in all of this. Transhumanism can be defined as a loose ideological movement united by the belief that the human race can evolve beyond its current physical and mental limitations, especially by means of science and technology.

"Of course, that's not a chip in our brain and that's only a paper, you will say. Unfortunately, it is part of a much broader, and very disturbing, discourse on the (mis)use of AI," Cabrol concluded. "It is about who we really want to become as a species."

Knowledge cutoff

Williams reacted to the AI-ChatGPT off-world setting in full disclosure mode. She is a participating scientist on the NASA Curiosity and Perseverance rover missions that have robots scouting about on Mars.

"How funny that we still argue about the definition of life as we know it, and we're starting to use a tool in that search that also stretches the definition of life," said Amy Williams, assistant professor in Geological Sciences at the University of Florida in Gainesville. "Personally, I see AI useful as a tool, and I will confine it as that."