Background

The field of Systems is large and diverse, covering a range of subjects from the ‘tangible’, such as computing and cybernetics, through to the abstract and philosophical, such as epistemology and ontology.

There has long been an interest in artificial life and intelligence, from the mythical Greek automaton Talos, through Frankenstein’s monster, to current ‘artificial intelligences’ (AI) that are capable of learning and making decisions in financial markets, identifying malicious email, playing virtual board games, and so on.

Systems, as a recognised subject in its own right, has been around since the 1940s. Notable influences relevant here are W. Ross Ashby, who authored “Design for a Brain”, and Stafford Beer, who invented the Viable System Model (VSM). The VSM describes the critical components necessary for any system to be viable, although Beer applied it mainly to organisational management (as in his book “Brain of the Firm”).
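The VSM’s structure can be caricatured in code. The sketch below is a simplification for illustration only: the one-line glosses, the `Organisation` class and its viability check are this author’s shorthand, not Beer’s formalism. The one idea it does take from the VSM is that viability requires all five systems to be present.

```python
from dataclasses import dataclass, field

# The five interacting subsystems of Beer's Viable System Model.
# The one-line descriptions are a simplified gloss, not Beer's full account.
VSM_SYSTEMS = {
    "S1": "Operations - the primary activities that do the system's work",
    "S2": "Coordination - damping conflict between S1 units",
    "S3": "Control - internal regulation and resource allocation",
    "S4": "Intelligence - scanning the environment and looking ahead",
    "S5": "Policy - identity and ultimate authority, balancing S3 and S4",
}

@dataclass
class Organisation:
    """Toy model of an organisation as a set of implemented VSM systems."""
    name: str
    implemented: set = field(default_factory=set)

    def missing_systems(self):
        """Return the VSM systems this organisation lacks, in order."""
        return sorted(set(VSM_SYSTEMS) - self.implemented)

    def is_viable(self):
        """In VSM terms, viability requires all five systems to be present."""
        return not self.missing_systems()

# A firm with operations and management but no System 4 has no
# environmental scanning, so (on this caricature) it cannot notice
# a novel threat, however efficient its day-to-day processes are.
firm = Organisation("Acme", {"S1", "S2", "S3", "S5"})
```

Here `firm.is_viable()` is false and `firm.missing_systems()` reports `["S4"]`, mirroring the later point that an organisation can handle known challenges well yet still fail to detect new ones.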

‘Popular’ AI projects are often surrounded by hype. Whenever a computer beats a human at a board game (‘with a move that it learnt itself’), the media turns the frenzy dial to 11 because, apparently, machine consciousness has arrived and we’ll soon have robot overlords. This is a pity, because it misleads us into believing that, once computers are powerful enough, they will be capable of conscious thought in the same way that humans are (or even ‘better’ than humans). The elephant in the room, that organisational systems are already independently active, running things and beyond full human control, is routinely overlooked.

For reasons speculated upon elsewhere, computers of the current type, no matter how powerful, will never be capable of consciousness or truly intellectual thought, but a System can be. Put another way, computer-based AI (with regard to machine consciousness) is an engineer’s attempt at solving a problem that may be largely philosophical. ‘Machine’ AI may therefore never produce anything beyond complex automata, technically impressive though that might be.

By comparison, since a system is ‘a collection of parts put together for a purpose’, it may contain computers, processes, people and so on, as in any typical organisation. If such a system contains conscious, intellectual subsystems (‘people’), it follows logically that the system as a whole must already have conscious and intellectual capability. This perspective is not dissimilar to some of the theory behind hybrid intelligent systems.

An organisational system therefore has the potential to be more cognisant than any individual person (or AI machine). However, to be optimal, all of the individual components have to work together as a whole system, not as random parts that happen to share an office building. The whole can and should be greater than the sum of its parts.

This all seems obvious.

However, how many organisations are actually even merely optimal? At a very simple level, anyone who has made an enquiry with a firm, only to be bounced from department to department, told one thing by one manager and the opposite by another, or found it virtually impossible to resolve a problem such as an overpayment, will not come away thinking, “This organisation has intelligence”. Quite the opposite: what they have witnessed is an organisation’s inability to deal with a basic ‘challenge’ from its environment. The worst organisations wouldn’t even be aware that this is a problem, let alone do anything about it. The equivalent in a human ‘system’ would be a person responding to a question with a verbal answer while their hand writes a contradictory answer, neither of which is correct. And then doing the same thing next time.

At a higher level, even in efficient firms, where a challenge is dealt with properly, it is most likely because the firm has a refined, systematic process for dealing with a known and fixed range of challenges. It will not necessarily be conscious enough to deal with a novel challenge or, for example, to identify a new type of threat.

It is possible, even likely, that many organisations come nowhere near their full potential, and that organisations can be improved not merely incrementally but beyond anything current by an order of magnitude. Practically, they could be more responsive, more efficient, and so on. More speculatively, they could be conscious entities in their own right, which raises some interesting ethical and philosophical questions.

There are also very efficient but unethical organisations that have a negative effect on the world yet are not appropriately challenged.

The project is quite open, but the main vehicle for exploring these subjects is a ‘synthetic persona’, Beltis Steamburton. This could just as easily have been any organisational system, such as a special interest group, a society or a business, but making it more personable helps keep the emphasis on systems as autonomous entities. Abstract questions about identity and purpose are also perhaps more easily engaged with in human terms, as readers of this will most likely be human (at least for the foreseeable future).
