

Kubernetes - The New Elephant in Every Server Room

 

Updating in Progress - This post and the few following ones might look incoherent for a while (a few days or weeks perhaps)

Please skip this thread for now if you are not already interested in Kubernetes. "Incoherent" means that little will make sense while items get moved into rough categories. This thread has useful information, but it is NOT yet ready to serve as an INTRODUCTION to the subject.

The text currently consists of snippets cut and pasted from posts I made at various times and places in different Neowin threads.

It is meant as a public service to eventually inform people about an important and pervasive technology, invented by Google, that the entire computing industry has decided to adopt, like it or not. Please feel free to contribute material to help with the effort, but pointing out right now that this is an incoherent mess won't be very useful.

 

 

All thoughts and various musings on the subject are welcome

 

I'm planning to move my info here from another thread to keep the Kube stuff in one spot as it continues to take over the world, until literally everything we see on the internet is managed by Kubernetes...

 

The name is super geeky: the project was originally called Project Seven, inspired by Seven of Nine, and the design was derived from the internal Google Borg system. Kubernetes is Greek for "helmsman".

 

"Its development and design are heavily influenced by Google's Borg system, and many of the top contributors to the project previously worked on Borg. The original codename for Kubernetes within Google was Project Seven, a reference to Star Trek character Seven of Nine that is a 'friendlier' Borg. The seven spokes on the wheel of the Kubernetes logo is a nod to that codename." https://en.wikipedia.org/wiki/Kubernetes

 

 

 

The subject of "Kubernetes" covers a lot of ground and defines a software architecture and a hardware architecture that are vastly different from the previous "normal".

 

So if you know you are "small" and plan to stay "small", then the re-architecting is far too large a pain point; but if there is any chance of growth, then re-architecting later on is an even bigger pain point!

 

Once you get your head around it, the impression of the architecture goes from "man, that's complicated and ugly" to "wow, that's really delightfully elegant" - but without a strong need to scale or stay compatible with current server thinking, the advantages may be more subtle.

 

We all know dev trends can be more fashion than substance, so for once I can say this one is all substance: it scales out to billions of containers, and what impresses me most on a small scale is that it is the first off-the-shelf dev tech with "self healing" baked in from the start. That is, if Kubernetes detects an unresponsive container, it destroys it and starts another one. This works because containers are immutable.
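The self-healing described above is typically configured with a liveness probe on a Deployment. A minimal sketch (the image name, app name, and health endpoint are all hypothetical):

```yaml
# Hypothetical Deployment sketch: Kubernetes restarts a container
# whenever its liveness probe fails, giving the "self healing" behavior.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # hypothetical name
spec:
  replicas: 3               # Kubernetes keeps 3 healthy copies running
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: example/demo-app:1.0   # hypothetical image
        livenessProbe:                # unresponsive? kill and restart
          httpGet:
            path: /healthz            # assumed health endpoint
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5
```

Because the container is immutable, destroying and recreating it is always safe: the replacement is bit-for-bit identical to the original.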

 

And there the journey starts. Immutable containers create the first re-architecting pain point: all STATE must live outside of a container.

 

Oh, and let's back that up to say that EVERYTHING lives in a container whose life cycle Kubernetes controls. So whatever you have in a container can end up on any physical server at any moment, can be started on demand, can be killed if resources are needed elsewhere, etc. So even talking to your container needs a system, since IP addresses change and so on. That gets achieved through a "Service Mesh", which is an elegant concept: the mesh figures out where your app container is and provides "ingress" to it.
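In plain Kubernetes terms, the stable name in front of moving containers is a Service. A minimal sketch (names and ports are hypothetical):

```yaml
# Hypothetical Service sketch: clients talk to the stable name
# "demo-app", and Kubernetes routes traffic to whichever Pods
# currently match the selector, wherever they happen to be running.
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  selector:
    app: demo-app        # matches Pod labels, not IP addresses
  ports:
  - port: 80             # stable port clients use
    targetPort: 8080     # port the container actually listens on
```

A full service mesh (Linkerd, Envoy, etc.) layers richer routing, retries, and telemetry on top of this same idea.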

 

Containers done right are small and light, and form a whole application when combined into Pods, but "monolithic" containers are also a workable intermediate step; some examples of that can be found on the Bitnami Kubernetes site: https://bitnami.com/kubernetes
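A Pod grouping small containers into one deployable unit might look like this sketch (both images are hypothetical):

```yaml
# Hypothetical Pod sketch: two lightweight containers deployed
# together as one unit, sharing the Pod's network namespace.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: web               # the application itself
    image: example/web:1.0
    ports:
    - containerPort: 8080
  - name: log-shipper       # sidecar that forwards the app's logs
    image: example/log-shipper:1.0
```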

 

My previous (incomplete) notes:

 

 


14 answers to this question


The case for "Small" Kubernetes

 

 

I have not given much thought to Kubernetes at "small scale" or "no scale", other than the usual need to download and run a demo of some Kube-based app on a dev box.

 

I have been trying to launch a start-up for the last two years with a Kube-based AI system. Funding has finally run out and my main partner gave up due to stress, but it was based on the idea of a local on-premise core that could scale out on-premise or via the cloud, i.e. hybrid. Like all the start-ups I've been involved in, you learn a lot of things (and not just how bad venture capital is!)

 

So one little thing that applies to the concept of "small" Kubernetes is my idea of a design canvas to create a user-friendly front end to both Kube setup/control and AI pipeline/workflow control. Which is a long-winded way of saying that "Kubernetes in the Small" probably depends on "ease of implementation and use", which has not been a focus for the large-scale DevOps guys...

 

Step 1 would be to properly containerize an application, and before Kubernetes took over the world, just the learning curve of adapting to Docker was not simple. Docker made it easy to construct containers based on other containers, and the I.T. world went nuts doing the exact opposite of what you should do with a container by shoving entire Linux distros inside them. They had some weird fun doing this, and the overall insanity of it actually seemed to speed up the adoption of Docker everywhere, since it seemed just like a VM.

 

A container should be lightweight, with nothing but the application and the bare minimum of support libs, and it should retain no state. Without Kubernetes, there was still a journey involved in properly combining lightweight "app only" containers into a group of containers that could be deployed.
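A multi-stage Dockerfile is the usual way to get that "app plus bare minimum" image. A sketch assuming a Go application (the paths and names are hypothetical):

```dockerfile
# Hypothetical multi-stage build: compile in a full toolchain image,
# then copy only the binary into a tiny runtime image with no distro
# baggage and no retained state.
FROM golang:1.13 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/app   # hypothetical path

FROM scratch                 # empty base: just the app, nothing else
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The resulting image is a few megabytes instead of the hundreds you get by shoving a whole distro inside.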

 

Deploying various container "groupings" became the next thing worth automating, and tools like Docker Swarm (and Rancher and Mesos) were developed for that.

 

Then Google decided to open-source a deployment system based on their internal Borg and name it Kubernetes. It swamped the other solutions out there because it was based on mature designs that were being used daily on billions of containers. All the issues everyone else was struggling with, Google had already figured out...

 

I still have not really begun to think about "Kubernetes Small", and as already stated there might NOT be a use case there, but I like the idea of "hot swap" on containers, of "self-healing" by destroying unresponsive containers that are safely immutable, and even minor demand-load scale-out to a tiny additional cluster, etc. Within this "Kubernetes Small" that still retains the Service Mesh, you can also envision pipeline switching of various sorts, or even dynamic assembly of processing trees or DAGs...

 

 


Intro to Kubernetes

 

If you plan to grow/expand in the future, then starting out with the right architecture means you just add stuff later instead of re-designing stuff later. For better or for worse, HUMANITY has come together, like some butterfly-effect thing, to land solidly on Kubernetes for server-side ANYTHING. Anybody who is anybody in the computing industry is now a member of the Cloud Native Computing Foundation (Oracle was the last major holdout), which specifies, sometimes very specifically but more often in a general way, the right way to do anything with servers and clouds:

 

https://www.cncf.io/about/members/ - A Who's Who list of ALL the players in the Computing Industry

https://www.cncf.io - The CNCF is technically under the Linux Foundation, but Windows Containers are part of the CNCF standard

https://landscape.cncf.io - an interactive filterable list of ALL the server software and O/S that conform to CNCF

 

EVERYTHING LIVES IN A CONTAINER. Containers are immutable, so they can be constructed, torn down, restarted, or moved anywhere in the physical clusters where RAM and CPU are available, and all of the complex stuff is managed by Kubernetes.

 

So you no longer think about servers and which server is doing what. It takes a bit to set it up right, but then MAGIC HAPPENS. If you adapt to Kubernetes, then you just add servers down the road and it handles what goes where; if you add cloud Kubernetes servers, it adds them in. If you get an order for a million embroideries, Kubernetes fires up a million containers if you need it. Google is deploying billions of things in containers every day.
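That kind of demand-driven scale-out is what a HorizontalPodAutoscaler expresses. A sketch (the Deployment name and the limits are hypothetical):

```yaml
# Hypothetical autoscaler sketch: Kubernetes adds or removes copies
# of the demo-app Deployment to track CPU load.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: demo-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app            # hypothetical Deployment to scale
  minReplicas: 3
  maxReplicas: 100            # cap chosen for the example
  targetCPUUtilizationPercentage: 80
```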

 

Back to earth: the minimum Kubernetes config is 3 servers. But you can go super-starter and run those 3 servers as VMs on a single server. (You can also replace those 3 server VMs with Minikube, but I don't suggest that for any production usage.) Again, the allocation unit is a container, not a server. So adding servers just becomes a demand-load kind of thing. If you get a cloud contract, you can literally run embroidery on a million containers within minutes...

 

So that is just a rough sketch of the server architecture. Your down-to-earth requirements don't need the equipment you selected, and it would actually make expanding harder in the future. Since you will need more servers eventually, 3 cheap ones get your Kubernetes minimum 3-server config onto real hardware as a starting point a bit faster than one huge monolithic box right now.

 

You could dust off some old PCs that are new enough to run a hypervisor (any PC from the first-gen Intel Core i-series or later) and then just boot to Kubernetes on bare metal.

 

You can also just set up that minimum config in the cloud, but even that's too much work, because clouds now offer directly hosted Kubernetes.

 

Azure Kubernetes Service (AKS)

https://azure.microsoft.com/en-ca/services/kubernetes-service/

 

Running Kubernetes on Azure

https://kubernetes.io/docs/setup/turnkey/azure/

 

Kubernetes on AWS

https://aws.amazon.com/kubernetes/

https://aws.amazon.com/eks/

 

Running Kubernetes on AWS EC2

https://kubernetes.io/docs/setup/turnkey/aws/

 

 

 


Windows Containers

 

There is a HUGE sea change underway in the design, architecture, deployment, and real-time delivery of modern enterprise (and anything large) applications to users, and that is the Kubernetes revolution. At this point EVERY enterprise player has signed on to this architecture; it has arrived and will be considered mandatory dial-tone infrastructure within a few years, if not right now.

 

I point this out in your case because any establishment using wonderful .NET technology may have missed some of the signals and messaging around this architecture, since at first glance it seems to be about stuff a bit distant from .NET platforms: "Cloud" and Linux. Even if it is not possible to shoehorn a legacy system into the new way of doing things, there may be opportunities to build in compatibilities as you go along...

 

The standards around this architecture are run by the CNCF (Cloud Native Computing Foundation, part of the Linux Foundation), and it can easily be missed that they describe the future of enterprise computing BOTH for cloud and on-premise, and ALSO both for Linux and for Windows. Microsoft is a PRIMARY member of this foundation. There is no restriction on following a CNCF standard on local servers and with Windows technology. In fact, some of the tech is already baked right into the Windows API.

 

Skipping all the crap in between: the beautiful result of twisting application architecture into many Docker containers managed by Kubernetes is that the application becomes robust, scalable, hot-deployable and, most importantly for enterprise, self-healing with zero downtime. Kubernetes manages the life cycle and moves containers around as needed by resource requirements, best fit, and demand loading. All the infrastructure is free OSS, can run on local servers and dev machines (well, beefy ones...), and once working, scales with little or no effort to larger local clusters or the cloud, since it is a standard supported by every cloud provider.
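The "hot deployable with zero downtime" part comes from a rolling update strategy on the Deployment. A sketch of the relevant fragment (names and versions are hypothetical):

```yaml
# Hypothetical fragment: during an image upgrade, Kubernetes replaces
# Pods a few at a time so the app never drops below capacity.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod down during the rollout
      maxSurge: 1         # at most one extra Pod above the 5 desired
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: example/demo-app:1.1   # hypothetical new version
```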

 

The downside is a bit of head-scratching to understand where to store state when the application containers are stateless (the only way to get self-healing), and how to talk to your application when Kubernetes might have moved it anywhere!

 

Windows 10 and the latest Windows Server have native code built into the Windows API to support both native Windows containers and Linux containers. The latest version of .NET Core thrives in this flexible, cross-platform, ubiquitous environment.

 

https://www.cncf.io

https://www.cncf.io/about/members/

https://landscape.cncf.io

https://www.docker.com/products/windows-containers

 

Windows Containers on Windows 10

https://docs.microsoft.com/en-us/virtualization/windowscontainers/quick-start/quick-start-windows-10

 

Linux Containers on Windows 10

https://docs.microsoft.com/en-us/virtualization/windowscontainers/quick-start/quick-start-windows-10-linux

 

[Image: CNCF Trail Map (CNCF_TrailMap_latest.png)]

 


Persistence in a Kubernetes/Docker Environment

 

Persistence is tricky in an orchestrated container environment because you can't normally store anything in a container. As long as containers are immutable, Kubernetes can spin them up and down, move them all around, do the Locomotion on them, and fit them into cluster space like Tetris!

 

Of course you can have storage; it just has to be planned out in a way that lets Kubernetes keep maximum freedom to manage things. This also ends up being a very good, clean architecture for your server app design.
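The standard planned-out form is a PersistentVolumeClaim mounted into the Pod. A sketch (names, image, and size are hypothetical):

```yaml
# Hypothetical sketch: the Pod stays stateless and disposable while
# its data lives in a claim that Kubernetes binds to real storage.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi            # example size
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-db
spec:
  containers:
  - name: db
    image: example/db:1.0      # hypothetical image
    volumeMounts:
    - name: data
      mountPath: /var/lib/data # state lands here, outside the image
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: demo-data
```

When the Pod is destroyed and recreated somewhere else, the claim follows it, so the container itself never carries state.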

 

The subject area falls into file systems, generic object storage, and CNCF-savvy databases. Although I'll list all the OSS databases involved, the popular commercial databases like Microsoft SQL Server and Oracle are part of the CNCF, like every computer industry player that matters on Planet Earth, and I will try to mention those as well.

 

 

Performance metrics

 

 

TBD

 

 

File Systems and Object Storage

 

Rook: Storage Orchestration for Kubernetes
https://github.com/rook/rook

 

ceph/ceph: Ceph is a distributed object, block, and file storage platform
https://github.com/ceph/ceph

 

chubaofs/chubaofs: a distributed file system for cloud native applications
https://github.com/chubaofs/chubaofs

 

container-storage-interface/spec: Container Storage Interface (CSI) Specification.
https://github.com/container-storage-interface/spec

 

dragonflyoss/Dragonfly: Dragonfly is an intelligent P2P based image and file distribution system.
https://github.com/dragonflyoss/Dragonfly

 

coreos/flannel: flannel is a network fabric for containers, designed for Kubernetes
https://github.com/coreos/flannel/

 

gluster/glusterfs: Gluster Filesystem - (this is only a public mirror, see the README for contributing)
https://github.com/gluster/glusterfs

 

longhorn/longhorn: Cloud native, distributed block storage built on and for Kubernetes
https://github.com/longhorn/longhorn

 

minio/minio: MinIO is a high performance object storage server compatible with Amazon S3 APIs
https://github.com/minio/minio

 

openebs/openebs: Leading Open Source Container Attached Storage, built using Cloud Native Architecture, simplifies running Stateful Applications on Kubernetes.

https://github.com/openebs/openebs

 

opensds/opensds: Hotpot: OpenSDS Controller Project

https://github.com/opensds/opensds

 

rexray/rexray: REX-Ray is a container storage orchestration engine enabling persistence for cloud native workloads
https://github.com/rexray/rexray

 

apache/avro: Apache Avro
https://github.com/apache/avro

 

leo-project/leofs: The LeoFS Storage System
https://github.com/leo-project/leofs

 

moosefs/moosefs: MooseFS – Open Source, Petabyte, Fault-Tolerant, Highly Performing, Scalable Network Distributed File System
https://github.com/moosefs/moosefs

 

open-io/oio-sds: OpenIO Software Defined Storage, Flexible + Smart + Fast
https://github.com/open-io/oio-sds

 

openstack/swift: OpenStack Storage (Swift)
https://github.com/openstack/swift

 

OSS Databases

 

 

arangodb/arangodb: 🥑 ArangoDB is a native multi-model database with flexible data models for documents, graphs, and key-values. Build high performance applications using a convenient SQL-like query language or JavaScript extensions.
https://github.com/arangodb/arangodb

 

apache/incubator-druid: Apache Druid (Incubating) - Column oriented distributed data store ideal for powering interactive applications
https://github.com/apache/incubator-druid

 

YugaByte/yugabyte-db: The high-performance distributed SQL database for global, internet-scale apps.
https://github.com/YugaByte/yugabyte-db

 

pingcap/tidb: TiDB is a distributed HTAP database compatible with the MySQL protocol
https://github.com/pingcap/tidb

 

sorintlab/stolon: PostgreSQL cloud native High Availability and more.
https://github.com/sorintlab/stolon

 

antirez/redis: Redis is an in-memory database that persists on disk. The data model is key-value, but many different kind of values are supported: Strings, Lists, Sets, Sorted Sets, Hashes, HyperLogLogs, Bitmaps.
https://github.com/antirez/redis

 

scylladb/scylla: NoSQL data store using the seastar framework, compatible with Apache Cassandra
https://github.com/scylladb/scylla

 

prestosql/presto: Official home of Presto, the distributed SQL query engine for big data
https://github.com/prestosql/presto

 

postgres/postgres: Mirror of the official PostgreSQL GIT repository. Note that this is just a *mirror* - we don't work with pull requests on github. To contribute, please see https://wiki.postgresql.org/wiki/Submitting_a_Patch
https://github.com/postgres/postgres

 

pilosa/pilosa: Pilosa is an open source, distributed bitmap index that dramatically accelerates queries across multiple, massive data sets.
https://github.com/pilosa/pilosa

 

percona/percona-server: Percona Server
https://github.com/percona/percona-server

 

OpenTSDB/opentsdb: A scalable, distributed Time Series Database.
https://github.com/OpenTSDB/opentsdb

 

mongodb/mongo: The MongoDB Database
https://github.com/mongodb/mongo

 

neo4j/neo4j: Graphs for Everyone
https://github.com/neo4j/neo4j

 

MariaDB/server: MariaDB server is a community developed fork of MySQL server. Started by core members of the original MySQL team, MariaDB actively works with outside developers to deliver the most featureful, stable, and sanely licensed open SQL server in the industry.
https://github.com/MariaDB/server

 

hazelcast/hazelcast-jet: A general purpose distributed data processing engine, built on top of Hazelcast.
https://github.com/hazelcast/hazelcast-jet

 

hazelcast/hazelcast: Open Source In-Memory Data Grid
https://github.com/hazelcast/hazelcast

 

apple/foundationdb: FoundationDB - the open source, distributed, transactional key-value store
https://github.com/apple/foundationdb

 

dgraph-io/dgraph: Fast, Distributed Graph DB
https://github.com/dgraph-io/dgraph

 

juxt/crux: Open Time Store
https://github.com/juxt/crux

 

crate/crate: CrateDB is a distributed SQL database that makes it simple to store and analyze massive amounts of machine data in real-time.
https://github.com/crate/crate

 

apache/cassandra: Mirror of Apache Cassandra
https://github.com/apache/cassandra

 

bigchaindb/bigchaindb: Meet BigchainDB. The blockchain database.
https://github.com/bigchaindb/bigchaindb

 

apache/carbondata: Mirror of Apache CarbonData
https://github.com/apache/carbondata

 

apache/ignite: Mirror of Apache Ignite
https://github.com/apache/ignite

 

rethinkdb/rethinkdb: The open-source database for the realtime web.
https://github.com/rethinkdb/rethinkdb

 

joyent/manta: Manta, Triton’s object storage and converged analytics solutions, is a HTTP-based object store that uses OS containers to allow compute on data at rest.
https://github.com/joyent/manta

 

orientechnologies/orientdb: OrientDB is the most versatile DBMS supporting Graph, Document, Reactive, Full-Text, Geospatial and Key-Value models in one Multi-Model product. OrientDB can run distributed (Multi-Master), supports SQL, ACID Transactions, Full-Text indexing and Reactive Queries. OrientDB Community Edition is Open Source using a liberal Apache 2 license.
https://github.com/orientechnologies/orientdb

 

cockroachdb/cockroach: CockroachDB - the open source, cloud-native SQL database.
https://github.com/cockroachdb/cockroach

 

attic-labs/noms: The versioned, forkable, syncable database
https://github.com/attic-labs/noms

 

mysql/mysql-server: MySQL Server, the world's most popular open source database, and MySQL Cluster, a real-time, open source transactional database.
https://github.com/mysql/mysql-server

 

couchbase/manifest: Top-level source repository for Couchbase Server source code and build projects
https://github.com/couchbase/manifest

 

 

 

 

 

 


Next

 

Something like a snapshot of a system or a database is very primitive compared to container self-healing, which is a kind of quantum-leap first step towards a Holy Grail of computing. It works. It has not beaten down the doors of anyone's attention, since it can be seen as "limited" in that it needs major changes to the architecture of things. It is a remarkable by-product of Docker containers being stateless, where an entire application image becomes the (huge) equivalent of a stateless HTTP request.

 

Normal stuff you expect in your VM server world that is missing in zillions of amorphous clusters of Docker containers:

 

1. You need a Service Mesh to locate and talk to your App:

(your App moves around, changes IP address, adds copies of itself on demand load, etc)

 

Examples of CNCF Solutions:

https://linkerd.io  https://github.com/linkerd/linkerd2

https://www.getambassador.io  https://github.com/datawire/ambassador

https://www.envoyproxy.io https://github.com/envoyproxy/envoy

https://traefik.io  https://github.com/containous/traefik

 

2. You most likely will need a file system composed of specialized Containers:

(your App can be destroyed, moved etc so like a HTTP request nothing is retained locally)

 

Examples of CNCF Solutions:

https://rook.io https://github.com/rook/rook

https://www.openebs.io  https://github.com/openebs/openebs

https://min.io  https://github.com/minio/minio

 

3. You will need State management

 

- can be as simple as using Container Native Storage (#2 above)

- or a DB (ideally a CNCF standards compliant DB)

- or a "Serverless" API

 

but #3 is a more complex subject for another day...

 

But also, for once, the complexity ends up yielding a very real simplicity which is why Google, who invented Kubernetes is running BILLIONS of containers every day!

 

https://kubernetes.io

https://azure.microsoft.com/en-ca/services/kubernetes-service/

https://en.wikipedia.org/wiki/Kubernetes

 

-------------------

 

So, I started talking about containers and stuff because the only conceivable reason I could think of for VMs on a tiny 8 GB RAM footprint was learning. Modern VM tech is focused on efficient container deployment across clusters, and to some extent, for development and learning purposes, a cluster farm can be simulated on a single box with some VMs. One could start with Minikube or use the included infrastructure in the latest Docker for Windows, which uses Hyper-V and includes Kubernetes.

 

https://kubernetes.io/docs/getting-started-guides/minikube/

 

https://docs.docker.com/docker-for-windows/kubernetes/

 

Download for Edge version of Docker that includes Kubernetes, install on any Hyper-V capable Windows computer:

 

https://download.docker.com/win/edge/Docker%20for%20Windows%20Installer.exe

 

"Kubernetes is only available in Docker for Windows 18.02 CE Edge. Kubernetes support is not included in Docker for Windows 18.02 CE Stable. To find out more about Stable and Edge channels and how to switch between them, see General configuration.

Docker for Windows 18.02 CE Edge includes a standalone Kubernetes server and client, as well as Docker CLI integration. The Kubernetes server runs locally within your Docker instance, is not configurable, and is a single-node cluster.

The Kubernetes server runs within a Docker container on your local system, and is only for local testing. When Kubernetes support is enabled, you can deploy your workloads, in parallel, on Kubernetes, Swarm, and as standalone containers. Enabling or disabling the Kubernetes server does not affect your other workloads.

 

See Docker for Windows > Getting started to enable Kubernetes and begin testing the deployment of your workloads on Kubernetes."

 

(Native support for Docker Containers is built into Windows 10 and Windows Server)

 

-----------------------------------------

NOTE TO READERS:

 

For anyone passing by and going WTF: Google uses Kubernetes to deploy everything in billions of containers. They donated this infrastructure to the OSS community, and it is now humanity's de facto standard for deploying applications to servers; hence it is the first thing somebody would want to learn if they want to play with modern server deployment.

 

Every major computing company in the industry has come together around this standard and formed the umbrella organization CNCF, the Cloud Native Computing Foundation. (The word "Cloud" can be confusing, as it all applies to local servers as well.)

 

https://www.cncf.io/

 

https://github.com/cncf/landscape

 

-----------------------------------

 

Well, in case it is interesting to anyone: most of the Linux server world uses RHEL or its free version, CentOS, with Fedora being the bleeding-edge development vehicle for them.

 

To get a lightweight base for Containers, Docker uses Moby and a lot of people use Alpine as well for that.

 

If you install the handy Docker for Windows on Windows 10, it will actually set up all the Hyper-V config you need and install Moby. This is all supported inside the Windows kernel, so on Windows 10 each container runs isolated in its own Hyper-V instance for development purposes, but on Windows Server or Linux, multiple Docker containers run in a single VM for increased efficiency while still supplying application-level virtualization.

 

Then Kubernetes will run 500 of these containers per VM Cluster and scale that out as needed. This works pretty much the same on either Windows or Linux as code is increasingly being shared between the Linux and Windows kernels.

 

Once you essentially forget about the O/S and separate stuff into Containers, all the efficiencies of Hypervisors just get multiplied.

 

Google is currently running its stuff in about 2 billion containers managed by Kubernetes.

 

 


But What About Windows?

 

The computing industry is a giant ecosystem, and like real ecosystems, sometimes there is cooperation and sometimes there is competition. It would be easy for somebody who likes using Microsoft tech to think that, with the CNCF being part of the Linux Foundation and most of the container tech being Linux-based, Windows is somehow on the outside. Certainly, within the vast selection of OSS projects that make up the CNCF ecosystem, there are some with a "bad attitude" that still think they are fighting the Linux wars of the previous millennium, but for the most part everyone is pro-container more than anything else, and there is a positive attitude toward seeing software work on both Linux-based containers and Windows-based containers.

 

More to come... i.e. Microsoft is a major player on the boards of the Linux Foundation and the CNCF, and both operating systems are working together to share common code and improve XPLAT.

 

 

Configuring Docker for Windows and Kubernetes on your Windows Desktop

 

 

Once everything you do is architected into containers, then deployment to local on-premise servers or to the cloud is identical and flexible.

 

The VM then becomes just a host for containers, and in theory you need only one or zero VMs, or maybe 10, depending on how you might emulate a cluster locally.

 

Every major industry player has finally signed on to this system under the umbrella of the CNCF. The only confusing part is the name: "Cloud Native" just means containers orchestrated by Kubernetes, and it is perfectly happy to be on-premise with no cloud, but also adapts without change to any cloud spin-up you want to do.

 

The easiest way to play with it is to download the latest "Docker for Windows" (get the Edge version, which also contains Kubernetes). You need a Hyper-V-enabled Windows 10. https://store.docker.com/editions/community/docker-ce-desktop-windows

 

 

The convenience of running multiple Linux and Windows VMs on an incredibly efficient and powerful hypervisor produces a platform that drives the progress of both Windows and Linux simultaneously. The cross-pollination between the two operating systems constantly increases; for example, a major convergence of the Windows and Linux networking stacks was needed to permit the embedding of Docker and Kubernetes support into Windows, Linux and macOS...

 

http://blog.kubernetes.io/2017/09/windows-networking-at-parity-with-linux.html

 

 


Setting Up a Kubernetes Test System

 

In 2019, when you say the word "server", the Cloud Native Computing Foundation is the authoritative source of prescriptive architecture for EVERYTHING.

 

A minimum config for a modern setup is 3 servers, probably one or two more when you take data persistence and backup into account.

 

The cheapest way to make a "newbie starter" version of the correct architecture is to use VMs to emulate the physical servers you would buy if you could. It is a workable approach.

 

Also, the world has moved past VMs in the way you are thinking about them. They are just a holding pattern for Docker containers. And yes, a "proper" architecture has everything isolated into its own container. You don't start an application; you start one or more containers as a "Pod" that can be replicated and multiplied as needed ad infinitum, with the insanely complex life-cycle management of these entities handled by Kubernetes.

 

https://kubernetes.io/blog/2015/11/creating-a-raspberry-pi-cluster-running-kubernetes-the-shopping-list-part-1/

 

 


Ok, I got bored after the first post before getting to the links.  

 

No offense, but there wasn't any meat in it.  

 

Do you have a tldr version of this other than 3 servers minimum.

 

For example why the change in thought process, is this really a change in thought process over say a standard vm infrastructure that you should have been doing in 2008, or is this just rhetoric to support whatever cause.  

 

Just seems like a lot of technobabble around something that you should have been considering 10+ years ago.

2 hours ago, sc302 said:

Ok, I got bored after the first post before getting to the links.  

 

No offense, but there wasn't any meat in it.  

 

Do you have a tldr version of this other than 3 servers minimum.

 

For example why the change in thought process, is this really a change in thought process over say a standard vm infrastructure that you should have been doing in 2008, or is this just rhetoric to support whatever cause.  

 

Just seems like a lot of technobabble around something that you should have been considering 10+ years ago.

From the first post:

 

"Updating in Progress - This post and the few following ones might look incoherent for a while (a few days perhaps)"

 

I was starting with stuff I had posted here and there. That might end up being more work for me than another approach, but it needs to converge into something understandable. "A few days" might extend to a few weeks at this point.

 

That being said, I don't understand your questions.

 

Kubernetes has won a battle that played out over the last few years and every last major industry hold-out is now paying enormous money for a seat on the board at the CNCF

 

I decided not to describe the history or summarize "the wars"; I just want to lay out a rough sketch of what it is and how to look into using it.

 

The only thing that might prove different about my approach is an attempt to examine how small you can go, and whether smaller organizations that don't intend to expand and won't ever need to scale can still get perceived benefits. I suspect in advance that I won't find anything there that inspires anyone to action.

 

Sure, it's all boring as heck, and even after I add some diagrams it is still going to be a dry subject.

 

 

 


It is all dry. It is how it is presented that makes it interesting.

Can't say that I was very interested in the topic to begin with, but since you mentioned it I figured it was worth something.

Quote

Kubernetes has won a battle that played out over the last few years and every last major industry hold-out is now paying enormous money for a seat on the board at the CNCF

One would have to know that there was a battle, or why the CNCF has any bearing on anything. It seems more DevOps-driven than SysOps. And again my questions remain; I will put them more simply: "So what? How is this useful? How does it help me or my business?"

It is a lot of information, and you obviously have some sort of passion for this. This technology is something that you are either utilizing or want to utilize, and you have done a great deal to support or learn about it. When going about this, you may want to ELI5 the information given. Obviously it is a work in progress; what is going to keep me coming back to check up on it?

Since I haven't heard about it, I can probably say it does not pertain even a little bit to me or what I do, even though I am diving into SaaS and IaaS with partners. That is also why I say this seems more DevOps than SysOps, and why I question putting it under Programming... perhaps nothing more to see here.

 

3 hours ago, sc302 said:

It is all dry.  It is how it is presented to make it interesting. [...]

 

At the moment, as already stated, it is just a cut-and-paste from other posts I made at Neowin, and I made those posts out of respect for a reality we will all have to live with.

 

If it is DevOps, then Programming is the correct category, unless Neowin adds a lot of new categories. Not that I consider "DevOps" or "SysOps" or the trendier "AIOps" meaningful in any significant manner, but lack of meaning never stopped branding and fashion...

 

Maybe when this effort is done it will read better and explain better; or maybe not.

 

CNCF is a programming thing: an application architecture blueprint, a deployment blueprint, a server hardware prescription that essentially replaces VMs with containers, and a common set of standards for all computing for some time to come. Measured against the past, it is a minor miracle that it all came together under a single umbrella with 100% uptake. If you are not aware of this technology's tentacles reaching everywhere, someday you will be.

I should point out that I absolutely hated the design and architecture and the obvious self-promotion of Google's Borg system into Kubernetes, and I was rooting for the other players to win. They lost. Google won. This is what humanity will use everywhere going forward, and I might as well get used to it, turn my negative thoughts into unbiased neutrality, inform people, and figure out ways to make it less painful.

 

So who is missing here today? Nobody... https://www.cncf.io/about/members/

 

I have already stated that I hope to investigate how it could benefit small operations that don't need to scale, don't need to be demand-driven, and wouldn't otherwise benefit from the serious rigors of the application architecture involved. There is a good chance I won't succeed in either demonstrating or explaining that, and I certainly don't know the answer in advance.

 

Given the negative thoughts you have freely expressed, I would agree that there is no point in your returning until it is complete, perhaps weeks from now, and most likely even then you will find nothing to like. I don't think it is a likable technology. Google wants to sledgehammer you into using their "Go" programming language, and that seems to be working for them to some extent; if I don't work hard to be neutral, I could really get started on a rant...

 

 

 


I was trying to give constructive criticism, but I am no longer curious and am done with this. Perhaps when you have it complete it will make more sense; if it doesn't, I won't ask.

1 hour ago, sc302 said:

I was trying to give constructive criticism.  But I am no longer curious and am done with this.  Perhaps when you have it complete it will make more sense but if it doesn’t I won’t ask. 

Based on your feedback, I have updated the warning at the top of the main topic to hopefully make it clearer that the effort is, for now, an incoherent cut-and-paste mess.

 

Thanks for your input, I appreciate that you took some time to help out. I will try to address your concerns as the material gets fleshed out.

 

To any readers here: I welcome, and would be hugely grateful for, actual contributions of material on the subject, in the form of writings, thoughts, references, and links. If anyone would like to summarize the politics and history of how this situation came to be, I will add that in, but I won't write it myself. No amount of complaining to the titans of industry is going to change anything at this point. Google's Borg has assimilated!

 

 

 

 


This topic is now closed to further replies.