-
What is cognition?
-
Cognitive Psychology
- Study of mental processes
-
Intentional Stance (Dennett)
-
Example: reputation
- Folk psychology
- responsibility bias
- Psychology
-
Examples
- priming
- Theory of social evaluation
- Social bonding
- George A. Miller's 1956 Psychological Review article "The Magical Number Seven, Plus or Minus Two"
-
What is left out?
-
Biological processes
-
neural networks
- (but cognitive neuroscience exists)
- drugs
- illness
-
No-mind approaches
-
behaviourism
- Cognitive revolution (1950s) as an interdisciplinary movement to focus on the mind
- psychology, anthropology, linguistics, artificial intelligence, computer science, and neuroscience.
- A key idea in cognitive psychology was that by studying and developing successful functions in artificial intelligence and computer science, it becomes possible to make testable inferences about human mental processes. This has been called the reverse-engineering approach. (wikipedia)
-
Usage differs, for example regarding the place of emotions
- "The literature on peer review has focused almost exclusively on the cognitive dimensions of evaluation and conceives of extracognitive dimensions as corrupting influences. In my view, however, evaluation is a process that is deeply emotional and interactional." (Lamont, 2009)
-
Why have cognitive agents?
-
For the same reason that we have agents..
- Processes and mechanisms are the basis of computation
-
Our mind is better at understanding processes than at understanding complex math
- ... I don't have support for this. Is it true?
-
However, the mind performs much better when presented with familiar terms
- famous Wason (1966) four-card selection task: most people fail the abstract version, while performance improves sharply when the same rule is cast in familiar content (e.g., checking a drinking-age rule)
- For the AI dream of reverse engineering
-
For simulating socially complex situations
-
Mind-changing
- to correct self-harming habits
-
Complex decisions based on trust and reputation structures
-
design of targeted interventions
- For example, smoking ban in Italy and the Netherlands
- as soon as you consider context, simple models (Prisoner's Dilemma, Public Goods Game) fail to represent reality adequately
-
What software for cognitive modeling?
-
Traditional, single-agent cognition (thanks to J. Sabater for this part)
-
CLARION
- The Clarion cognitive architecture project aims to investigate the fundamental structures of the human mind by synthesizing many intellectual ideas into a unified, coherent model of cognition. In particular, our goal is to explore the interaction of implicit and explicit cognition, emphasizing bottom-up learning (i.e., learning that involves first acquiring implicit knowledge and then acquiring explicit knowledge on its basis).
- Our research is directed at forming a (generic) cognitive architecture that captures various cognitive processes with the ultimate goal of providing unified explanations for a wide range of cognitive phenomena. The current objectives of this project are two-fold:
- Developing artificial agents in certain cognitive task domains
- Understanding human decision-making, learning, reasoning, motivation, and meta-cognition in other domains.
- The Clarion cognitive architecture project is headed by Professor Ron Sun and has been supported by such agencies as ONR, ARI, and others.
- Status of publications on the website as of Nov 14
-
SOAR
- Rule-based
-
From Intro:
- Soar is a general cognitive architecture for developing systems that exhibit intelligent behavior. Researchers all over the world, both from the fields of artificial intelligence and cognitive science, are using Soar for a variety of tasks. It has been in use since 1983, evolving through many different versions to where it is now Soar, Version 9.
- We intend ultimately to enable the Soar architecture to:
- work on the full range of tasks expected of an intelligent agent, from highly routine to extremely difficult, open-ended problems
- represent and use appropriate forms of knowledge, such as procedural, semantic, episodic, and iconic
- employ the full range of problem solving methods
- interact with the outside world, and
- learn about all aspects of the tasks and its performance on them.
- In other words, our intention is for Soar to support all the capabilities required of a general intelligent agent.
- In Soar, every decision is based on the current interpretation of sensory data, the contents of working memory created by prior problem solving, and any relevant knowledge retrieved from long-term memory. Decisions are never precompiled into uninterruptible sequences.
-
ACT-R
-
Intro:
- ACT-R is a cognitive architecture: a theory for simulating and understanding human cognition. Researchers working on ACT-R strive to understand how people organize knowledge and produce intelligent behavior. As the research continues, ACT-R evolves ever closer into a system which can perform the full range of human cognitive tasks: capturing in great detail the way we perceive, think about, and act on the world.
- ACT-R is a hybrid cognitive architecture. Its symbolic structure is a production system; the subsymbolic structure is represented by a set of massively parallel processes that can be summarized by a number of mathematical equations. The subsymbolic equations control many of the symbolic processes. For instance, if several productions match the state of the buffers, a subsymbolic utility equation estimates the relative cost and benefit associated with each production and decides to select for execution the production with the highest utility. Similarly, whether (or how fast) a fact can be retrieved from declarative memory depends on subsymbolic retrieval equations, which take into account the context and the history of usage of that fact. Subsymbolic mechanisms are also responsible for most learning processes in ACT-R.
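- For instance (a summary from the general ACT-R literature, not from the text above): in classic ACT-R, the utility of a matching production i was its expected gain E_i = P_i * G - C_i, where P_i is the estimated probability that production i leads to the goal, G is the value of the goal, and C_i the expected cost; the matching production with the highest (noise-perturbed) utility fires. Later versions instead learn utilities incrementally, U_i(n) = U_i(n-1) + alpha * (R_i(n) - U_i(n-1)), from the rewards R_i that follow a production's use.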
-
Architecture
- Pretty picture.
- Planning experiments: in parallel with psychological ones
- Comparison
- There are two types of modules:
- perceptual-motor modules, which take care of the interface with the real world (i.e., with a simulation of the real world). The most well-developed perceptual-motor modules in ACT-R are the visual and the manual modules.
- memory modules.
- There are two kinds of memory modules in ACT-R:
- declarative memory, consisting of facts such as Washington, D.C. is the capital of the United States, France is a country in Europe, or 2+3=5, and
- procedural memory, made of productions. Productions represent knowledge about how we do things: for instance, knowledge about how to type the letter “Q” on a keyboard, about how to drive, or about how to perform addition.
-
BDI approaches (again thanks to J. Sabater)
- based on the Belief-Desire-Intention software model that implements the principal aspects of Michael Bratman’s theory of human practical reasoning
- the BDI model is based on “folk psychology” and was developed only as a way of explaining future-directed intention and not as a general model for cognition.
-
Multi-agent cognitive BDI architectures
-
Jason (thanks to F. Grimaldo for this part)
-
Basic ideas
- Concurrent, multi-agent
- PLAN-based
- Original take on plan recovery
-
Technically..
- Internals of the agent based on AgentSpeak BDI
- Beliefs represent the information available to an agent (e.g., about the environment or other agents)
- publisher(wiley) ("Wiley is a publisher")
- wiley(publisher) (the same fact, with the alternative choice of predicate)
- fiume_esondato_3_novembre (a single opaque atom, Italian for "river flooded on 3 November": no structure to reason over)
- fiume(esondato, nov3) (better: a structured term)
- annotation
- fiume(esondato, nov3)[degOfCert(0.9), source(francesca)] (Jason annotations keep meta-information separate from content)
- fiume(esondato, nov3, 0.9, francesca) (the alternative: extra arguments, which change the predicate's arity and blur its meaning)
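- A minimal sketch of how a plan can use such annotations (the flood domain and the 0.8 threshold are hypothetical; degOfCert and source are conventional Jason annotations, and .print is a standard Jason internal action):
- // react to a flood report only if it is reasonably certain and comes from someone else
  +fiume(esondato, Day)[degOfCert(C), source(S)]
     : C > 0.8 & S \== self
     <- .print("Flood on ", Day, " reported by ", S).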
- Goals represent states of affairs the agent wants to bring about (come to believe, when goals are used declaratively)
- • Achievement goals:
- !write(book)
- Or attempts to retrieve information from the belief base
- • Test goals:
- ?publisher(P)
- An agent reacts to events by executing plans
- Events happen as a consequence of changes in the agent’s beliefs or goals
- AgentSpeak triggering events:
- • +b (belief addition)
- • -b (belief deletion)
- • +!g (achievement-goal addition)
- • -!g (achievement-goal deletion)
- • +?g (test-goal addition)
- • -?g (test-goal deletion)
- Plans are recipes for action, representing the agent’s know-how
- An AgentSpeak plan has the following general structure:
- triggering_event : context <- body.
- +!drill_67P
     : not battery_charge(low) & drill_ok & current_power(P)
     <- drill_at_power(P).       // normal case: drill at the current power
  -!drill_67P
     : ~drill_ok.                // the drill is known to be broken: give up
  -!drill_67P
     : current_power(P)
     <- -+current_power(P+1);    // recovery: raise the power level
        !drill_67P.              // and try the goal again
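- To run this sketch, an agent would also need initial beliefs and an initial goal (hypothetical values; drill_at_power/1 stands for an action provided by the environment):
- drill_ok.
  current_power(3).
  !drill_67P.                    // initial achievement goal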
- Rolling
- Sleep well.
- Exercise
- Ingredients:
  - location(a,b) means that object a is at location b
  - !examine(object) is the subgoal of examining an object
  - !at(coordinates) is the subgoal of getting the lander to the given coordinates
  - Assume that a new belief enters the system: there is a green patch on a rock, which makes it worth examining: green_patch(r123)
- +green_patch(Rock)
: not battery_charge(low)
<- ?location(Rock,Coordinates);
!at(Coordinates);
!examine(Rock).
- .. and where is the intention? .. wait
- Intentions are committed plans and exist at the level of the reasoning cycle
- ten steps
- 1. Perceiving the Environment
- 2. Updating the Belief Base
- 3. Receiving Communication from Other Agents
- 4. Selecting ‘Socially Acceptable’ Messages
- 5. Selecting an Event
- 6. Retrieving all Relevant Plans
- 7. Determining the Applicable Plans
- 8. Selecting one Applicable Plan (see the sketch below)
- 9. Selecting an Intention for Further Execution
- 10. Executing one step of an Intention
- figure..
- Jason's reasoning cycle in pictures
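- A toy illustration of steps 6-8 (hypothetical plans and actions; by default Jason considers plans in the order they appear in the agent source and selects the first applicable one):
- // Both plans are relevant for the event +!get(water) (step 6).
  // Step 7 keeps those whose context currently holds;
  // step 8 selects one of them, by default the first in source order.
  +!get(water) : have(bottle) <- fill(bottle).
  +!get(water) : true         <- !find(container); !get(water).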
- World described in Java, but see..
-
The JaCaMo triad
- Jason
- Agents
- CArtAgO
- Artefacts
- Moise
- Organizations
-
Reputation as a cognitive artefact
-
The theory applies to reputation about a norm.
- Arguably, all reputation is about a norm in some sense. There are cases of reputation about skill, but even then one could say it is about the norm that one should perform well (in one's profession, for example)
-
About that norm, there are four essential roles.
- T
- E
- B
- G
-
Here, we explain who they are.
-
Reputation involves four sets of agents:
- • a nonempty set T of agents who are the targets of the evaluation
- • a nonempty set E of agents who share the evaluation (the evaluators)
- • a nonempty set B of beneficiaries, i.e., the agents sharing the goal with regard to which the elements of T are evaluated
- • a nonempty set G (gossipers) of agents (also called third parties) who share the meta-belief that members of E share the evaluation; this is the set of all agents aware of the effect of reputation (as stated above, effect is only one component of it; awareness of the process is not implied).
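- A rough AgentSpeak sketch of the distinction between E and G (hypothetical predicates and agent name neighbour; image and reputation are borrowed loosely from the Repage model cited in the references, and .send is a standard Jason internal action):
- // An evaluator (member of E) holds its own evaluation of target t:
  image(t, good_seller, 0.2)[source(self)].
  // A gossiper (member of G) holds only the meta-belief that an evaluation circulates:
  reputation(t, good_seller, bad)[source(ag1)].
  // Spreading reputation commits the gossiper to the meta-belief, not to the evaluation:
  +reputation(T, Norm, V)[source(A)]
     : A \== self
     <- .send(neighbour, tell, reputation(T, Norm, V)).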
-
Now, once the sets are identified, we ask what the overlaps between them are. Is everyone at the same time a target and an evaluator? A target and a gossiper? Or are the roles clearly distinguished?
-
Here is the tree.
-
Don't expand this node!
- TEBG
- T / EBG
- TE / BG
- TB / EG
- TG / EB
- E / TBG
- B / TEG
- G / TEB
- T / E / B / G
- In fact (since permutations do not count) there are only 9 cases.
- So we have a classification. We can even build a tree.
-
What are the effects of the different situations?
-
We need some hypotheses about the forces in play: what it means to share the same group, and what it means to be in separate groups.
-
To build a better theory, of course, we would need to specify also what it means to be in the same group.
- Groups are normally considered to be united, solidary, and mutually supportive
- However, exceptions happen, sometimes by design; consider a parliament, whose members are supposed to hold differing views on nearly all political matters
- With the exception of matters "of national interest" (in the classic nation-state credo), on one hand, and of members' wages, on the other
- Thus, a better theory would consider what the goals of groups are, and differentiate effects on the basis of these goals. This one does not.
- To keep things simple, we consider a group as one with common interests and goals, animated by solidarity, cohesion, and harmony of members towards each other.
-
We examine the results in terms of
- the positive/negative bias that they produce
- the amount of response they elicit (tendency to provide an answer even if uncertain vs. to remain silent)
- Combining the values of such intersections gives rise to countless situations. By reducing each intersection to a binary dimension, where values are either high or low, we can describe a finite (but still too large) variety of examples. We call attention to two rather extreme situations: gossip among students, and eBay.
- The higher the intersection between G and E, the higher G's commitment, and therefore gossipers' responsibility and propensity to provide evaluations. On the contrary, overlap between G and B (and, equivalently, between E and B) gives rise to a beneficiary-oriented benevolence, with a consequent negative bias. Instead, a higher intersection between G and T (or between E and T) leads to a leniency bias. Finally, the intersection between T and B concerns the perception of the effects of gossip on targets: the higher this perception, the stronger the expected responsibility of gossipers.
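- These hypotheses can be read as simple Jason belief rules (a hypothetical encoding; overlap/3 and the bias predicates are invented for illustration, with high/low reducing each intersection to a binary value as above):
- // overlap(Role1, Role2, high) means the two role sets intersect heavily
  provision_high :- overlap(g, e, high).
  negative_bias  :- overlap(g, b, high) | overlap(e, b, high).
  leniency_bias  :- overlap(g, t, high) | overlap(e, t, high).
  responsibility :- overlap(t, b, high).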
- Evidence produced in the relevant literature, cited in the second section of this paper, matches these expectations. As a matter of fact, a system characterised by these intersections among agent roles (such as eBay) does not qualify as reputational, according to our analysis, but rather as a system for image formation, augmented by centralised collection and distribution. These rather extreme examples show the advantages of the model presented so far: it allows for concrete predictions to be made and tested against available evidence, concerning both real-life examples and technological applications. Predictive models are much needed especially in the latter domain, where theory-driven expectations are merely economic (game-theoretic): they concern the positive effects of sellers' profiles on economic efficiency, rather than the functioning of reputation itself. Our claim is that a feedback profile is less than reputation. Of course, even though a system like eBay is not a truly reputational system, it seems to be good enough to meet market criteria (i.e., volume of transactions and level of prices). However, how healthy and stable is a market (whether electronic or traditional) characterised by feedback under-provision and overrating? More generally, what are the specific effects of overrating and underrating? To answer these questions, we will turn to artificial, simulation-based data.
-
References
-
BDI and AgentSpeak
- Rao, A. S. (1996). AgentSpeak(L): BDI agents speak out in a logical computable language. In Van de Velde, W. and Perram, J. W., editors, Agents Breaking Away, volume 1038 of Lecture Notes in Computer Science, pages 42-55. Springer Berlin Heidelberg.
- Bordini, R. H., Hübner, J. F., and Wooldridge, M. (2007). Programming multi-agent systems in AgentSpeak using Jason. John Wiley & Sons.
-
Reputation
- Sabater-Mir, J., Paolucci, M., and Conte, R. (2006). Repage: REPutation and ImAGE among limited autonomous partners. Journal of Artificial Societies and Social Simulation, 9(2).
-
Group size and punishment
- GiardiniMABS2014_Post-Proc02.pdf