Monday, August 22, 2022

A quick walkthrough of compute: mainframes to containers to smart contracts


This is my stab at the history of computer usage in business enterprises.

Time Shared Mainframes & Terminals

Early machines were big, heavy, and only for enterprise usage. Long-forgotten names like Wang, Sperry, and DEC roamed this ground, with IBM as the leader (IBM survives today, though it is barely growing). Because these machines were so big and expensive, users had to time-share them from dumb terminals. Some of these machines ran on hot vacuum tubes and electromechanical relays; in one famous incident, a moth got trapped in a relay of the Harvard Mark II and caused the machine to malfunction, and the logbook note about that "bug" helped popularize the term "debug" in this era.

Mini, Micro, and Personal Computer

The "personal computer" era started with the semiconductor industry producing cheap, powerful processors. Motorola and Intel were leaders in this field. The big and heavy mainframe machines shrank from room-sized to desktop-sized. Microsoft supplied DOS, but PC usage stayed relatively obscure until Windows was introduced.

Client and Server, Monolithic & Service Oriented Architecture

On the enterprise front, software was delivered in a client-server architecture. The client is the user-facing front of the computer system, usually a PC or a terminal. The back end of that system, invisible to the user, is the server. It is usually found in a room dedicated to servers, hence the name "back office". The programs that ran on the server included Enterprise Resource Planning (ERP), accounting, and databases. These programs were written as one single large program, hence "monolith". Monolithic programs sometimes needed to talk to other monolithic programs, and this was done via a Service-Oriented Architecture (SOA).
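As a minimal sketch of the client-server split (not any real product's code; the account data and port number are made up for illustration), here the server owns the data and answers over the network, while the client holds nothing and just asks:

```python
# server.py -- the "back office": it owns the data and answers requests.
import socket

ACCOUNTS = {"acme": 1200, "globex": 830}  # stand-in for a real database

with socket.create_server(("127.0.0.1", 9000)) as srv:
    while True:
        conn, _ = srv.accept()
        with conn:
            name = conn.recv(1024).decode().strip()   # request: an account name
            balance = ACCOUNTS.get(name, 0)
            conn.sendall(str(balance).encode())       # response: its balance
```

```python
# client.py -- the user-facing front: no data of its own,
# it asks the server and displays the answer.
import socket

with socket.create_connection(("127.0.0.1", 9000)) as conn:
    conn.sendall(b"acme")
    print("balance:", conn.recv(1024).decode())
```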

Cloud & RESTful API

The computer server infrastructure sat physically in the office, or in a dedicated remote building called a data center. But companies did not like buying, maintaining, and replacing servers that they owned. Why not lease them? That is what the cloud offered. Companies leased compute and started porting their original "monolithic" programs to the cloud, with programs increasingly exposing their functions to one another over HTTP via RESTful APIs. What if you had one program used by many users at the same time? If you had 100 users, do you run 100 copies of the program? That is wasteful. Multi-tenancy allowed one running program to be shared by all 100 users. Each individual request may be slightly slower, but one shared instance uses the hardware far more efficiently and keeps everyone's data consistent in one place.
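A minimal sketch of the multi-tenancy idea (class and tenant names are hypothetical): one process holds every tenant's data, and each request carries a tenant ID so the shared code looks up only that tenant's slice.

```python
# One shared application instance serving many tenants.
# Each tenant's rows live in the same store, tagged by tenant_id,
# so 100 customers need one running program, not 100.
from collections import defaultdict

class MultiTenantStore:
    def __init__(self):
        # tenant_id -> that tenant's private key/value data
        self._data = defaultdict(dict)

    def put(self, tenant_id: str, key: str, value):
        self._data[tenant_id][key] = value

    def get(self, tenant_id: str, key: str):
        # A tenant can only ever see its own slice of the data.
        return self._data[tenant_id].get(key)

store = MultiTenantStore()
store.put("acme", "invoice_total", 1200)
store.put("globex", "invoice_total", 830)
print(store.get("acme", "invoice_total"))    # 1200
print(store.get("globex", "invoice_total"))  # 830
```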

Cloud Native & Containers

Monolithic programs were not designed to scale to more users, more regions, or more storage. They also made upgrading components of the software very hard: you had to bring down the entire program instead of just updating one little piece. So big monolithic programs were broken down into small containers. The containers talk to each other through APIs or queues. If a component needs to be updated (say, the one responsible for displaying the latest news, which is not critical if it is part of a stock trading website), you just bring down that container and replace it with a newer one.
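Here is a minimal sketch of components talking through a queue, simulated with threads in one process (a real deployment would run separate containers with a broker such as RabbitMQ or Kafka between them; the service names are made up):

```python
# Two small services decoupled by a queue: the "news fetcher" can be
# stopped and replaced without the "page renderer" ever noticing,
# because they share only the queue, never each other's code.
import queue
import threading

news_queue: "queue.Queue[str]" = queue.Queue()

def news_fetcher():
    # Stand-in for the non-critical news component.
    for headline in ["Markets open higher", "Fed holds rates"]:
        news_queue.put(headline)
    news_queue.put(None)  # signal: no more items

def page_renderer():
    # Stand-in for the critical trading page; it just drains the queue.
    while (item := news_queue.get()) is not None:
        print("rendering headline:", item)

threading.Thread(target=news_fetcher).start()
page_renderer()
```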

The Future: Edge, WASM, Smart Contracts

Some say we have swung too far into the container world. Its benefits are obvious (scale, modular maintenance, etc.), but it also brings a lot of overhead (connectivity, versioning, etc.). We suspect adoption will be selective. New startups will pick the latest compute paradigm, but the old guard, with its vast amounts of technical debt (think COBOL), will change just enough to function, yet not enough to break. Watch out for more "edge" compute, including WebAssembly (WASM), where your browser is smart enough to perform the compute itself, elevating the browser from "show only" to "compute and show". Blockchain "smart contracts" will unify business code from both the application and the network itself into one place, with the added benefit that they can natively handle money via cryptocurrency.


