Wired: I've been reading about attempts to make agent-based systems for more than 10 years. Why is it so difficult?
Maes: I don't think it's hard. I think people have taken the wrong approach. In the early days of AI, people were very self-confident; they were convinced AI would be the solution to many problems. They put forward a very ambitious goal that I believe we may never achieve: to build agents that are very intelligent, have common-sense knowledge, and understand why people do things. AI researchers have been trying to do this for 15 or 20 years, and haven't seen significant results. The idea of agents really isn't new. There have been people working on agents all along - they just haven't produced many results yet.
Wired: So, how are the approaches you're using different from those of the past?

Maes: We have a less ambitious target. We don't try to build agents that can do everything or are omniscient. We try to build agents that help with the more repetitive, predictable tasks and behaviours.
Wired: Would there be a specific agent for a specific task?

Maes: Right - that's what we've been building so far. The system learns about its user's habits, interests, and behaviours with respect to that task. It can detect patterns and then offer to automate them on behalf of the user. Recently, we have augmented that approach with collaboration - agents can share the knowledge they have learned about their respective users. This is helpful for people who work in groups and share habits or interests. So those are the techniques we've been exploring: observing user behaviour, detecting regularities, watching correlations among users, and exploiting them.
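The observe-and-offer loop Maes describes can be pictured as a simple frequency model. Below is a minimal sketch in Python under that assumption; the class, method names, and threshold values are hypothetical illustrations, not code from Maes's group.

```python
from collections import Counter, defaultdict

class InterfaceAgent:
    """Watches (situation, action) pairs and offers to automate regularities."""

    def __init__(self, confidence_threshold=0.8, min_observations=5):
        self.history = defaultdict(Counter)   # situation -> action frequencies
        self.confidence_threshold = confidence_threshold
        self.min_observations = min_observations

    def observe(self, situation, action):
        """Record what the user did in a given situation."""
        self.history[situation][action] += 1

    def suggest(self, situation):
        """Offer to automate an action the user has taken consistently."""
        counts = self.history[situation]
        total = sum(counts.values())
        if total < self.min_observations:
            return None                        # not enough evidence yet
        action, n = counts.most_common(1)[0]
        confidence = n / total
        if confidence >= self.confidence_threshold:
            return action, confidence
        return None

# After watching the user repeatedly file mailing-list mail,
# the agent offers to do it automatically.
agent = InterfaceAgent()
for _ in range(6):
    agent.observe("mail from announce-list", "file under lists")
print(agent.suggest("mail from announce-list"))   # ('file under lists', 1.0)
```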
Wired: How does the user maintain control with these systems?

Maes: We think it's important to keep the user in control, or at least always give them the impression that they are in control. In all of the systems we build, the user decides whether to give the agent autonomous control over each activity. So it's the users who decide whether the agent is allowed to act on their behalf, and how confident the agent has to be before it is allowed to do so. Users can also instruct agents, giving them rules for special situations. You can tell the system whether a rule is soft or hard - soft meaning it is accepted as a default that can be overwritten by what the agent learns, hard meaning the agent cannot overwrite it.
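This control model - a user-set confidence threshold plus hard and soft rules - translates directly into code. Here is a sketch of one plausible reading; the names and decision order are assumptions for illustration, not the actual design of these systems.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    situation: str
    action: str
    hard: bool   # a hard rule can never be overridden by learning

class ControlledAgent:
    def __init__(self, autonomy_threshold):
        # The user, not the agent, sets this threshold per activity.
        self.autonomy_threshold = autonomy_threshold
        self.rules = []
        self.learned = {}   # situation -> (action, confidence)

    def instruct(self, situation, action, hard=False):
        self.rules.append(Rule(situation, action, hard))

    def learn(self, situation, action, confidence):
        self.learned[situation] = (action, confidence)

    def decide(self, situation):
        # 1. Hard rules always win.
        for r in self.rules:
            if r.situation == situation and r.hard:
                return ("act", r.action)
        # 2. Learned behaviour can override soft rules, but the agent
        #    acts alone only above the user-set confidence threshold.
        if situation in self.learned:
            action, confidence = self.learned[situation]
            if confidence >= self.autonomy_threshold:
                return ("act", action)
            return ("suggest", action)   # below threshold: ask first
        # 3. Otherwise soft rules serve as defaults.
        for r in self.rules:
            if r.situation == situation:
                return ("act", r.action)
        return ("defer", None)           # no opinion: leave it to the user

agent = ControlledAgent(autonomy_threshold=0.9)
agent.instruct("mail from boss", "flag urgent", hard=True)
agent.instruct("newsletter", "file", hard=False)
agent.learn("newsletter", "delete", confidence=0.95)
print(agent.decide("mail from boss"))   # ('act', 'flag urgent')
print(agent.decide("newsletter"))       # ('act', 'delete') - learning overrides the soft rule
```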
Wired: How do you see the Internet affecting your work?

Maes: The Internet is part of the motivation for agents - it's going to be impossible, if it isn't already, for people to deal with the complexity of the online world. I'm convinced that the only solution is to have agents that help us manage the complexity of information. I don't think designing better interfaces is going to do it. There will be so many different things going on, and so much new information and software becoming available, that we will need agents that are our alter egos - agents that know what we are interested in and monitor databases and parts of networks for us.
Wired: It won't be how great your software is, it will be how great your agent is?

Maes: I'm convinced there will be great pieces of software, but you'll need an agent to help you find them.
Wired: I see a few problems with the idea of agents. One is that they are never more than 90 per cent accurate; another is that they can take a significant amount of time to learn my behaviour.

Maes: I agree that we will never get 100 per cent accuracy - agents will always make mistakes. But whenever you delegate to someone - be it a human or a program - you inevitably give up some accuracy. If you give a task to someone else, it will never be done quite the way you want. Delegation is the only way to cope with how much work you have. If you had an infinite amount of time, you wouldn't need to delegate, but no one has that kind of time. For example, I've never had the time to read newsgroups or to find the ones I wanted, but with the newsreader agent we built, news articles are suggested to me, and that gives me the time to read them. You have to be careful which tasks you delegate - if the cost of a mistake is high, don't let someone else do it. But many tasks are low-risk. If my newsreader agent gives me an article I don't want, or forgets to give me one I do want, it has already done more than I could do without it. You just have to be aware of the cost of a mistake for a particular task and adjust the agent's autonomy accordingly.

The learning time is not necessarily a negative feature. Users will have less difficulty accepting agents if the agents gradually gain their trust - trust has to be earned, and that always takes time. We increased the learning rate once we explored having agents collaborate. We found agents were independently learning the same things - for instance, that messages from mailing lists or newsgroups have a lower priority than personal mail. With collaboration, agents can start with shared libraries of experience.
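Two ideas in that answer lend themselves to a worked sketch: scaling the required confidence to the cost of a mistake, and bootstrapping a new agent from habits its peers agree on. The functions and numbers below are illustrative assumptions, not the Media Lab's actual implementation.

```python
from collections import Counter

def required_confidence(mistake_cost, base=0.7):
    """Higher-stakes tasks demand more confidence before the agent acts
    alone; mistake_cost is a user-supplied value in [0, 1]."""
    return min(0.99, base + (1 - base) * mistake_cost)

# Low risk: a misfiled news article is cheap, so the bar is low.
print(required_confidence(mistake_cost=0.1))    # ~0.73
# High risk: an unwanted purchase is costly, so the bar approaches certainty.
print(required_confidence(mistake_cost=0.95))   # ~0.985

def bootstrap_from_peers(peer_models):
    """Start a new agent from defaults its peers agree on - e.g. that
    mailing-list mail has lower priority than personal mail."""
    shared = {}
    situations = {s for model in peer_models for s in model}
    for situation in situations:
        votes = Counter(m[situation] for m in peer_models if situation in m)
        action, n = votes.most_common(1)[0]
        if n > len(peer_models) / 2:            # adopt only majority habits
            shared[situation] = action
    return shared

peers = [
    {"mailing-list mail": "low priority", "mail from boss": "high priority"},
    {"mailing-list mail": "low priority"},
    {"mailing-list mail": "low priority", "mail from boss": "high priority"},
]
print(bootstrap_from_peers(peers))
```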
Wired: What about someone else using my machine - or my agent?

Maes: Security is a general computer issue - it's not unique to agents. Security will develop as computers advance. I think people will, for other reasons, want their agents - and particularly the knowledge their agents have about them - secured.
Wired: How long will it be before something like the mail system you've developed becomes a product?

Maes: I don't think it will be long at all - I suspect in the next two years.
Wired: How will agents change the way people use and think about computers?

Maes: I hope agents will make people feel more comfortable dealing with the overload of information - more in control, and confident that the agents working on their behalf are reliable, never tire, and are always looking to help them. One problem I see is that people will question who is responsible for an agent's actions - especially when an agent takes up too much time on a machine, or purchases something on your behalf that you don't want. Agents will raise a lot of interesting issues, but I'm convinced we won't be able to live without them.

Scott Berkun (scottber@microsoft.com) is a UI/Usability specialist at Microsoft.