For the self
1. We can think of our perception of ourselves in the same way we perceive others, i.e. actions that we perceive (our own motions, or anyone else's) are echoes of intent buried deeper in the "mind". (This is essentially intention vs awareness.)
2. In order to explore where this intention comes from and what it "looks like", we need to stop thinking and merely observe. Active thinking gets in the way of observing.
3. Intention is intrinsically linked to sensory data, but usually the amount of incoming data is too vast to observe fully (like watching an unfiltered packet log). So start small: block out certain senses and use environments that isolate one sense at a time (e.g. closing your eyes while a clock ticks, or a quiet room with a moving light).
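The "unfiltered packet log" analogy can be made concrete with a toy sketch (my own illustration, not from the note: the event list, channel names, and the `attend` helper are all invented). A mixed stream of "sense" events is hard to scan, but a single sense is trivial to follow once the others are blocked out:

```python
# A stream of mixed "sense" events - a stand-in for raw incoming
# sensory data, which the note compares to an unfiltered packet log.
events = [
    ("sight", "light moves left"),
    ("sound", "clock tick"),
    ("touch", "chair pressure"),
    ("sound", "clock tick"),
    ("sight", "light moves right"),
    ("sound", "clock tick"),
]

def attend(stream, sense):
    """Block out every channel except one - like closing your eyes."""
    return [detail for channel, detail in stream if channel == sense]

# Following only one channel makes its pattern (the ticking) obvious.
print(attend(events, "sound"))  # ['clock tick', 'clock tick', 'clock tick']
```

The point of the sketch is only that reducing the number of channels makes the remaining one observable, which is what the exercises (eyes closed, quiet room) are doing.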
(Sidenote: Realising that there is no such thing as "free will" (as most people would like to define it) is the first step to creating simulative models of thought?)
For society and evolution
4. Societies are emergent aggregations of individuals. Is the "optimum" individual for a society, then, the one closest to the idea of an "average" individual? If so, are societies self-regulatory, naturally imposing forces on individuals to keep them near that average? Furthermore, is this a generic rule for any system made up of "individuals" or nodes? Are there "forces" placed on, say, neural nodes that naturally weed out nodes that are inherently different? If so, are all these systems "survival-seeking" by nature? (Stability isn't just an evolutionary advantage; it's required for something to exist in the first place, and non-stability (i.e. failure to normalise) = non-existence.) If so, are humans and animals just one (tiny?) kind of system that naturally - i.e. by its nature of existing - has a want/need/pre-requirement to maintain itself, but which only we call "survival" for some reason?
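The self-regulation speculated about above can be sketched as a toy simulation (my own illustration; the "pull toward the average" rule, the population size, and all numbers are assumptions, not anything the note specifies). Each individual carries one trait value and is nudged a little toward the population mean every step, and the spread of the population shrinks: outliers get "weeded" toward the average.

```python
import random

def step(traits, pull=0.1):
    """Move every trait a fraction of the way toward the current mean -
    a stand-in for the self-regulatory 'forces' speculated about."""
    mean = sum(traits) / len(traits)
    return [t + pull * (mean - t) for t in traits]

random.seed(0)
traits = [random.gauss(0, 5) for _ in range(100)]  # a varied population

spread_before = max(traits) - min(traits)
for _ in range(50):
    traits = step(traits)
spread_after = max(traits) - min(traits)

# After 50 steps of a 10% pull, each deviation from the mean has been
# multiplied by 0.9 fifty times, so the spread collapses toward zero.
print(spread_after < spread_before)  # prints True
```

Nothing here argues that real societies work this way; it only shows that the single assumed rule ("pull toward the average") is enough to produce the normalising behaviour the question describes.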
5. If 4 holds, does this place altruism vs self-interest in a different realm of understanding? If everything is a fractal series of "survival system" layers, then self-interest could be considered a necessity of any particular lower layer (i.e. the individual in a group, a node in a brain, etc.) and altruism a necessity of the upper layer (i.e. a pull towards the larger "entity/system").
6. How, then, do groups evolve? And if individuals are themselves groups, can we scale up the factors that evolve individuals and apply them to groups too? This would bear, it seems, on how we can create "larger", more stable groups, e.g. on a global scale (but on a local scale too). Understanding the factors that drive the evolution of any system means our actions are more likely to take effect. Practically speaking, this means we should stop coming up with ideas for "progress" merely on an idea's merits, and instead come up with ideas based on how well they'll work: become an "effectiveness-oriented" thinker.