
        UC Davis researchers look at trust in data communications

        According to a press release from 10 September 2001, computer security researchers at the University of California at Davis are studying a system that lets exposed, "untrusted" machines go on providing useful, accurate information, even though they might have been infiltrated and compromised. Their method uses a digital signature from a "trusted" computer to help verify the integrity of data received from an "untrusted" computer.
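The press release does not describe the researchers' protocol in detail, but the general idea of a trusted party vouching for data integrity can be sketched as follows. This toy version uses an HMAC with a shared key rather than a true asymmetric digital signature, purely to keep the sketch dependency-free; the key, function names, and sample record are all illustrative assumptions.

```python
import hmac
import hashlib

# Shared secret held by the trusted machine and the verifier.
# (A real digital-signature scheme would use an asymmetric key pair;
# an HMAC stands in here to keep the sketch self-contained.)
SECRET_KEY = b"example-shared-secret"

def sign(data: bytes) -> str:
    """Trusted machine attaches an integrity tag before data is handed out."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify(data: bytes, tag: str) -> bool:
    """Check that the data received from an untrusted machine was not altered."""
    expected = hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

record = b"sensor reading: 42.7"
tag = sign(record)
print(verify(record, tag))                    # unmodified data passes
print(verify(b"sensor reading: 99.9", tag))   # tampered data fails
```

The point of the scheme is that the untrusted machine can relay both the data and the tag, but cannot forge a valid tag for altered data without the key.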

        Computer program learns language in artificial childhood

        An Anonymous Coward points out this item from the Reuters news agency on CNN, which deals with an AI language-learning research project in Israel, aimed at teaching a computer to use language the same way a human child does. Appropriately for this year of 2001, the program is named HAL. Read about it at:
        http://www.cnn.com/2001/TECH/industry/08/20/computer.hal.reut/index.html

        Software agents make better commodities traders

        from the buy-low-and-sell-high dept.
        An item from the UK-based New Scientist Magazine reports on research by Jeffrey Kephart at IBM's research center in Hawthorne, NY, that indicates software-based trading agents may be better at trading commodities than humans. In IBM's test, both software-based robotic trading agents (bots) and people had the same set-up, allowing them to trade through an unbiased software-based auctioneer. The auction was designed to mimic the kind of commodities market where buyers and sellers have a fixed amount of time to trade in a single commodity. Six bots and six people traded against each other. Half were buyers and half were sellers. Buyers were given an upper spending limit, while sellers had a minimum sale price. Their goal was to maximise their profit at the end of trading. The software agents made seven per cent more cash than the humans.
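The article does not describe IBM's matching rule or the bots' trading strategies, but the market structure it describes (buyers with spending limits, sellers with minimum prices, profit to be maximized) can be illustrated with a toy auction. Everything below — the midpoint pricing rule and the greedy matching — is an illustrative assumption, not IBM's design.

```python
# Toy version of the auction set-up described above: the keenest buyers
# are matched with the cheapest sellers, and each trade is struck at the
# midpoint, splitting the surplus evenly between the two parties.

def run_auction(buyer_limits, seller_minimums):
    """Match buyers and sellers greedily; return total profit extracted."""
    buyers = sorted(buyer_limits, reverse=True)   # highest limit first
    sellers = sorted(seller_minimums)             # lowest minimum first
    total_profit = 0.0
    for limit, minimum in zip(buyers, sellers):
        if limit < minimum:
            break                                  # no more profitable trades
        price = (limit + minimum) / 2
        total_profit += (limit - price) + (price - minimum)
    return total_profit

print(run_auction([10, 8, 6], [5, 7, 9]))  # -> 6.0
```

A trading agent's job in such a market is to decide what price to bid or ask at each moment so as to capture as much of that surplus as possible; the IBM result suggests software agents do this more effectively than people.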

        EU initiative aims at integrated systems with "life-like perceptions"

        from the blue-sky-research dept.
        The Future and Emerging Technologies (FET) division of the European Union's Information Society Technologies (IST) Program has launched a new research initiative to develop "life-like perception systems". The objective of the initiative is "to create integrated perception-response systems that are inspired by the sophistication of solutions adopted by living systems. 'Perception' is meant to include sensorial, cognitive, control and response aspects, whether it refers to vision or hearing, or to any other type of interaction with the environment by a biological organism. Such systems would extend the capabilities of machines or be used to augment the human senses."
        More information about the program can be found on the CORDIS website at http://www.cordis.lu/ist/fetbi.htm.

        AI researcher says nanotech won't build brain-like machines

        from the intelligence-issues dept.
        United Press International science correspondent Kelly Hearn recently interviewed artificial intelligence researcher Eric Chown ("Thinking robots coming, but decades away", 14 July 2001). Chown is a professor of computer science at Bowdoin College in Brunswick, Maine. When asked if nanotechnology will help engineers build machines that better mimic the brain's activity, Chown said: "No, I don't think so. Nanotechnology will provide amazing breakthroughs in the medical domain in terms of robotic surgery and such. But in terms of building human-like robots, I don't think it will contribute greatly. I really think that the big breakthroughs will come in terms of better understanding of how the brain works."
        On the question of whether the future will bring a merging of flesh and machines, Chown said, "merging man and machine is more a short-term issue than the potential long-term issue of machines actually replacing people. In terms of ethical questions, in the short run, I don't see a big ethical problem. If somebody can't see and an optical implant can help them, that's a good thing. But it doesn't take a great leap to see how it could get out of control. We aren't doing enough in society to consider the ethics of the technologies we're developing."

        LA Times columnist favors uploading

        from the chips,-ahoy! dept.
        In a commentary in the Los Angeles Times spurred by the release of the film A.I., Bart Kosko, a professor of electrical engineering at USC and author of Heaven in a Chip (Random House, 2000), places himself in the intellectual camp that sees a merger of humans and their technology as inevitable.

        "It will be far easier to make us more like computers than to make computers more like us," says Kosko. He concludes: "So forget 'A.I.'s' vision of lumbering machines that simply mimic our pre-computer notions of speech and movement and emotions. Brains and robots and even biology are not destiny. Chips are."

        VR systems help envision large data sets

        from the visionary dept.
        A team of researchers at the Center for Image Processing and Integrated Computing (CIPIC) at the University of California, Davis is applying virtual reality to help scientists see and handle large, complex sets of data. According to the press release on their work, the researchers say the simplest way to handle this data is to make it visible, so that scientists can "see" what is happening in an experiment. Virtual reality allows researchers to interact with the data while they are looking at it, making changes and seeing what happens.

        The center is also offering a graduate-level class in which students learn how to build and work with virtual reality environments.

        Mindpixel project will apply psych test to AI model

        from the real-world-AI dept.
        On a more practical note, the Mindpixel Digital Mind Modeling Project has announced that a standard psychological test used by clinicians worldwide in the evaluation and treatment of adults will be administered to a machine-based artificial personality.
        The Mindpixel Project is a large worldwide AI effort, with nearly 40,000 contributing members in more than 200 countries. The project's goal is to build a highly accurate statistical model of an average human mind which they hope can be used as a foundation for true artificial consciousness. The test will be applied to GAC (Generic Artificial Consciousness — pronounced "Jack"), an artificial personality being developed by Mindpixel. GAC will be evaluated over the next several months to assess its learning of human consensus experience from the Mindpixel project's large and diverse group of users from many different cultures.
        The test will be supervised and interpreted by Dr. Robert Epstein, an expert on human and machine behavior. "Nothing like this has ever been attempted," said Epstein. "We're evaluating thousands of people worldwide as if they were one collective individual . . . We don't know if it is possible to build a normal personality out of millions of little pieces. This experiment will tell us how reasonable the idea is."
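Mindpixel's "little pieces" were yes/no propositions validated by many contributors, and the statistical model aggregates their answers into a consensus. The project's actual training and representation are not described here; the following sketch only illustrates the consensus idea, and all names and sample statements are hypothetical.

```python
from collections import defaultdict

# Each statement collects boolean votes from contributors; the "model"
# is simply the fraction of contributors who answered yes.
votes = defaultdict(list)

def contribute(statement: str, answer: bool) -> None:
    """Record one contributor's yes/no judgement on a statement."""
    votes[statement].append(answer)

def consensus(statement: str) -> float:
    """Return the fraction of contributors who answered yes."""
    answers = votes[statement]
    return sum(answers) / len(answers)

contribute("Water is wet.", True)
contribute("Water is wet.", True)
contribute("Water is wet.", False)
print(round(consensus("Water is wet."), 2))
```

Evaluating GAC "as if it were one collective individual" amounts to asking whether such aggregated consensus values, taken together, cohere into something resembling a normal personality.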

        Analysis of Spielberg's movie, AI

        from the gradual-future-shock? dept.
        redbird (Gordon Worley) writes "Most of this is filled with spoilers, so I recommend that, unless you've seen the film, you don't click read more. For those of you looking for a basic review, this is an okay movie (I'd give it about 2.5 out of 5 stars), but certain aspects of the film really ruin it. Basically, I consider this a cute movie about subhuman AIs that is not dangerous to the public's perception of AIs (in fact, it may actually help it by gradually future shocking them)."

        Read more for redbird's review . . .

        Researcher describes method to allow AI systems to argue

        from the Open-the-pod-bay-doors,-HAL dept.
        Ronald P. Loui, Ph.D., an associate professor of computer science at Washington University in St. Louis, has described a method for using artificial intelligence that incorporates the ability to argue into computer programs. His work is initially focused on legal arguments.

        Loui's article, "Logical Models of Argument," consolidates research results from the mid-80s to the present. It appears in the current ACM Computing Surveys.
        According to a press release on Loui's work, A.I. argument systems permit a new kind of reasoning to be embedded in complex programs. He says the reasoning is much more natural, more human, more social, even more fair. His proposal for A.I. argumentation is based on defeasible reasoning — which recognizes that a rule supporting a conclusion can be defeated. The conclusion is what A.I. specialists call an argument instead of a proof. Defeasible reasoning draws upon patterns of reasoning outside of mathematical logic, such as ones found in law, political science, rhetoric and ethics. Defeasible reasoning is based on rules that don't always hold if there are good reasons for an exception. It also permits rules to be more or less relevant to a situation. In this sense it is like analogy: One analogy might be good, but a different one might be better.
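The classic toy example of defeasible reasoning is "birds fly, but penguins don't": the specific rule defeats the general one. Loui's argument systems are far richer than this, and the priority-by-specificity scheme and rule names below are illustrative assumptions, not his formalism.

```python
# Minimal sketch of defeasible rules: each rule supports a conclusion,
# but a higher-priority rule covering an exception defeats it.
rules = [
    # (priority, condition, conclusion) — higher priority wins
    (1, lambda facts: "bird" in facts,    ("flies", True)),
    (2, lambda facts: "penguin" in facts, ("flies", False)),
]

def conclude(facts):
    """Apply the highest-priority applicable rule; its conclusion stands."""
    applicable = [(p, concl) for p, cond, concl in rules if cond(facts)]
    if not applicable:
        return {}
    _, (attr, value) = max(applicable)
    return {attr: value}

print(conclude({"bird"}))             # general rule: birds fly
print(conclude({"bird", "penguin"}))  # exception defeats it: penguins don't
```

Unlike a logical proof, the conclusion here is only as good as the arguments that survived: adding a new, more specific rule can overturn it, which is exactly the behavior found in legal reasoning.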
