<h1>Introduction to OS Migrate, the OpenStack parallel cloud migration toolbox</h1>
<p><em>2021-07-12</em></p>
<p><a href="https://github.com/os-migrate/os-migrate">OS Migrate</a> is a toolbox
for content migration (workloads and more) between
<a href="https://www.openstack.org/">OpenStack</a> clouds. Let’s dive into why
you’d use it, some of its most notable features, and a bit of how it
works.</p>
<h2 id="the-why">The Why</h2>
<p>Why move cloud content between OpenStacks? Imagine these situations:</p>
<ul>
<li>
<p>Old cloud hardware is obsolete and you’re buying new. A greenfield
deployment will be easier than gradual replacement of hardware in
the original cloud.</p>
</li>
<li>
<p>You want to make fundamental changes to your OpenStack deployment
that would be difficult or risky to perform on a cloud which is
already providing service to users.</p>
</li>
<li>
<p>You want to upgrade to a new release of OpenStack, but you want to
cut down on associated cloud-wide risk, or you can’t schedule
cloud-wide control plane downtime.</p>
</li>
<li>
<p>You want to upgrade to a new release of OpenStack, but the cloud
users should be given a choice of when to stop using the old release
and start using the new one.</p>
</li>
<li>
<p>A combination of the above.</p>
</li>
</ul>
<p>In such situations, running (at least) two clouds in parallel for a
period of time is often the preferable path. And when you run parallel
clouds, perhaps with the intention of decommissioning some of them
eventually, a tool may come in handy to copy/migrate the content that
users have created (virtual networks, routers, security groups,
machines, block storage, images etc.) from one cloud to another. This
is what OS Migrate is for.</p>
<h2 id="the-pitch">The Pitch</h2>
<p>Now we know OS Migrate copies/moves content from one OpenStack to
another. But there is more to say. Some of the design decisions that
went into OS Migrate should make it a tool of choice:</p>
<ul>
<li>
<p><strong>Uses standard OpenStack APIs.</strong> You don’t need to install any
plugins into your clouds before using OS Migrate, and OS Migrate
does not need access to the backends of your cloud (databases etc.).</p>
</li>
<li>
<p><strong>Runnable with tenant privileges.</strong> For moving tenant-owned
content, OS Migrate only needs tenant credentials (not
administrative credentials). This naturally reduces risks associated
with the migration.</p>
<p>If desired, cloud tenants can even use OS Migrate on their
own. Cloud admins do not necessarily need to get involved.</p>
<p>Admin credentials are only needed when the content being migrated
requires admin privileges to be created (e.g. public Glance images).</p>
</li>
<li>
<p><strong>Transparent.</strong> The metadata of exported content is in
human-readable YAML files. You can inspect what has been exported
from the source cloud, and tweak it if necessary, before executing
the import into the destination cloud.</p>
</li>
<li>
<p><strong>Stateless.</strong> There is no database in OS Migrate that could get out
of sync with reality. The source of migration information is the set
of human-readable YAML files. ID-to-ID mappings are not kept;
entry-point resources are referred to by name.</p>
</li>
<li>
<p><strong>Idempotent.</strong> In case of an issue, fix the root cause and re-run,
be it export or import. OS Migrate has mechanisms to guard against
duplicate exports and duplicate imports.</p>
</li>
<li>
<p><strong>Cherry-pickable.</strong> There’s no need to migrate all content with OS
Migrate. Only migrate some tenants, or further scope to some of
their resource types, or further limit the resource type
exports/imports by a list of resource names or regular expression
patterns – see the sketch after this list. Use as much or as little
of OS Migrate as you need.</p>
</li>
<li>
<p><strong>Implemented as an Ansible collection.</strong> When learning to work with
OS Migrate, most importantly you’ll be learning to work with
Ansible, an automation tool used across the IT industry. If you
already know Ansible, you’ll feel right at home with OS Migrate.</p>
</li>
</ul>
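<p>To make the cherry-picking concrete, here’s a minimal sketch of
scoping an export. I’m assuming filter variables of the shape the OS
Migrate user documentation describes (lists of names and/or regex
entries); treat the exact variable names as illustrative and verify
them against your collection version:</p>
<pre><code class="language-yaml"># Only export networks whose names start with "prod-"
os_migrate_networks_filter:
  - regex: '^prod-'
# Only export two explicitly named workloads (VMs)
os_migrate_workloads_filter:
  - appserver1
  - appserver2
</code></pre>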
<h2 id="the-how">The How</h2>
<p>If you want to use OS Migrate, the best thing I can do here is point
towards the <a href="https://os-migrate.github.io/os-migrate/user/README.html">OS Migrate User
Documentation</a>. If
you just want to get a glimpse for now, read on.</p>
<p>As OS Migrate is an Ansible collection, the main mode of use is
setting Ansible variables and running playbooks shipped with the
collection.</p>
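<p>For a feel of the workflow, here’s a minimal sketch of exporting
tenant networks. The playbook is addressed by its collection-qualified
name (supported by recent Ansible; older docs reference the playbook
file path directly), and the inventory and variables file are
assumptions you’d create per the user documentation:</p>
<pre><code class="language-bash"># install the collection from Ansible Galaxy
ansible-galaxy collection install os_migrate.os_migrate

# inventory.yml defines the Migrator host (can be just localhost);
# os-migrate-vars.yml holds auth details and os_migrate_data_dir
ansible-playbook \
  -i inventory.yml \
  -e @os-migrate-vars.yml \
  os_migrate.os_migrate.export_networks
</code></pre>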
<p>Should the default playbooks not fit a particular use case, a
technically savvy user could also utilize the collection’s roles and
modules as building blocks to craft their own playbooks. However, as
I wrote above in the cherry-picking point, we’ve tried to make the
default playbooks generically usable.</p>
<p>In OS Migrate we differentiate between two main migration types with
respect to what resources we are migrating: pre-workload migration,
and workload migration.</p>
<h3 id="pre-workload-migration">Pre-workload migration</h3>
<p>Pre-workload migration focuses on content/resources that can be copied
to the destination cloud without affecting workloads in the source
cloud. It can typically be done with little timing pressure, ahead of
time before migrating workloads. This includes resources like tenant
networks, subnets, routers, images, security groups etc.</p>
<p>The content is serialized as editable YAML files to the Migrator host
(the machine running the Ansible playbooks), and then resources are
created in the destination according to the YAML serializations.</p>
<p><img src="https://www.jistr.com/assets/images/posts/2021-07-12-introduction-to-os-migrate/pre-workload-data-flow.svg" alt="Pre-workload migration data flow" style="min-width: 60%; max-width: 100%;" /></p>
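<p>For illustration, an exported network serialization could look
roughly like this (a hand-written sketch of the format, not actual
tool output; field names may differ between versions):</p>
<pre><code class="language-yaml">os_migrate_version: 0.17.0
resources:
  - type: openstack.network.Network
    params:
      name: prod-net
      description: Production network
      is_shared: false
      mtu: 1450
    _info:
      id: 1f4e3a0c-...   # source-cloud ID, informational only
</code></pre>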
<h3 id="workload-migration">Workload migration</h3>
<p>Workload migration focuses on copying VMs and their attached Cinder
volumes, and on creating floating IPs for VMs in the destination
cloud. The VM migration between clouds is a “cold” migration. VMs
first need to be stopped and then they are copied.</p>
<p>With regard to the boot disk of the VM, we support two options:
either the destination VM’s boot disk is created from a Glance image,
or the source VM’s boot disk snapshot is copied into the destination
cloud as a Cinder volume and the destination VM is created as
boot-from-volume. There is a <a href="https://os-migrate.github.io/os-migrate/user/migration-params-guide.html#workload-migration-parameters">migration
parameter</a>
controlling this behavior on a per-VM basis. Additional Cinder volumes
attached to the source VM are copied.</p>
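<p>As a sketch, the choice might appear in the exported workload YAML
along these lines (hand-written illustration; the linked guide is
authoritative for the exact parameter placement and spelling):</p>
<pre><code class="language-yaml">resources:
  - type: openstack.compute.Server
    params:
      name: appserver1
    _migration_params:
      # true: copy the boot disk into the destination as a Cinder volume
      # false: create the destination boot disk from a Glance image
      boot_disk_copy: true
</code></pre>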
<p>The data path for VMs and volumes differs slightly from the
pre-workload migration. Only metadata gets exported onto the Migrator
host. For moving the binary data, special VMs called <em>conversion
hosts</em> are deployed, one in the source and one in the
destination. This is done for performance reasons, to allow the VMs’
and volumes’ binary data to travel directly from cloud to cloud
without going through the (perhaps external) Migrator host as an
intermediary.</p>
<p><img src="https://www.jistr.com/assets/images/posts/2021-07-12-introduction-to-os-migrate/workload-data-flow.svg" alt="Workload migration data flow" style="min-width: 100%; max-width: 100%;" /></p>
<h2 id="the-pointers">The Pointers</h2>
<p>Now that we have an overview of OS Migrate, let’s finish with some
links where more info can be found:</p>
<ul>
<li>
<p><a href="https://os-migrate.github.io/os-migrate/">OS Migrate Documentation</a>
is the primary source of information on OS Migrate.</p>
</li>
<li>
<p><a href="https://app.element.io/#/room/#os-migrate:matrix.org">OS Migrate Matrix Channel</a>
is monitored by devs for any questions you might have.</p>
</li>
<li>
<p><a href="https://github.com/os-migrate/os-migrate/issues">Issues on Github</a>
is the right place to report any bugs, and you can ask questions
there too.</p>
</li>
<li>
<p>If you want to contribute (code, docs, …), see
<a href="https://os-migrate.github.io/os-migrate/devel/README.html">OS Migrate Developer Documentation</a>.</p>
</li>
</ul>
<p>Have a good day!</p>
<h1>Upgrading Ceph and OKD (OpenShift Origin) with TripleO</h1>
<p><em>2018-08-15</em></p>
<p>In OpenStack’s Rocky release, TripleO is transitioning towards a
method of deployment we call <em>config-download</em>. Basically, instead of
using Heat to deploy the overcloud end-to-end, we’ll be using Heat
only to manage the hardware resources and Ansible tasks for individual
composable services. Execution of software configuration management
(which is Ansible on the top level) will no longer go through Heat, it
will be done directly. If you want to know details, I recommend
watching James Slagle’s <a href="https://www.youtube.com/watch?v=-6ojHT8P4RE">TripleO Deep Dive about
config-download</a>.</p>
<p>The transition towards <em>config-download</em> also affects services/components
which we deploy by embedding external installers, like Ceph or OKD
(aka OpenShift Origin). E.g. previously we deployed Ceph via a Heat
resource, which created a Mistral workflow, which executed
ceph-ansible. This is no longer possible with config-download, so we
had to adapt the solution for external installers.</p>
<h2 id="deployment-architecture">Deployment architecture</h2>
<p>Before talking about upgrades, it is important to understand how we
deploy services with external installers when using
config-download.</p>
<p>Deployment using external installers with config-download was
developed during OpenStack’s Queens release cycle for the purpose of
installing Kubernetes and OpenShift Origin. In the Rocky release,
installation of Ceph and Skydive services transitioned to using the
same method (shout out to Giulio Fidente and Sylvain Afchain who
ported those services to the new method).</p>
<p>The general solution is described in my earlier
<a href="https://www.jistr.com/blog/2017-11-21-kubernetes-in-tripleo/#architecture">Kubernetes in TripleO</a>
blog post. I recommend being somewhat familiar with that before
reading on.</p>
<h2 id="upgrades-architecture">Upgrades architecture</h2>
<p><em>In OpenStack, and by extension in TripleO, we distinguish between
minor updates and major upgrades, but with external installers the
distinction is sometimes blurred. The solution described here was
applied to both updates and upgrades. We still make a distinction
between updates and upgrades with external installers in TripleO
(e.g. by having two different CLI commands), but the architecture is
the same for both. I will only mention upgrades in the text below for
the sake of brevity, but everything described applies to updates
too.</em></p>
<p>It was more or less given that we would use Ansible tasks for upgrades
with external installers, same as we already use Ansible tasks for
their deployment. However, <a href="http://lists.openstack.org/pipermail/openstack-dev/2018-July/132102.html">two possible
approaches</a>
suggested themselves. Option A was to execute a service’s upgrade tasks
and then immediately its deploy tasks, favoring a service upgrade
procedure that reuses a significant part of that service’s deployment
procedure. Option B was to execute only upgrade tasks, giving more
separation between the deployment and upgrade procedures, at the risk
of producing repetitive code in the service templates.</p>
<p>We went with option A (upgrade procedure includes re-execution of
deploy tasks). The upgrade tasks in this architecture are mainly meant
to set variables which then affect what the deploy tasks do
(e.g. select a different Ansible playbook to run). Note that with this
solution, it is still possible to fully skip the deploy tasks if
needed (using variables and <code>when</code> conditions), but it optimizes for
maximum reuse between upgrade and deployment procedures.</p>
<p><img src="https://www.jistr.com/assets/images/posts/2018-08-15-upgrading-ceph-and-okd-with-tripleo/external-upgrade.png" alt="Upgrades with external installers" /></p>
<h3 id="implementation-for-ceph-and-okd">Implementation for Ceph and OKD</h3>
<p>With the focus on reuse of deploy tasks, and both ceph-ansible and
openshift-ansible being suitable for such an approach, implementing
upgrades via the architecture described above didn’t require much
code.</p>
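<p>In template terms, the pattern boils down to something like this
(a simplified, hand-written sketch rather than the actual
tripleo-heat-templates code – the patches linked below show the real
implementation; names like <code>ceph_inventory</code> are illustrative):</p>
<pre><code class="language-yaml">external_upgrade_tasks:
  - when: step|int == 0
    tags: ceph
    set_fact:
      # switch the deploy tasks to ceph-ansible's rolling upgrade playbook
      ceph_ansible_playbook: /usr/share/ceph-ansible/infrastructure-playbooks/rolling_update.yml

external_deploy_tasks:
  - when: step|int == 2
    tags: ceph
    # runs ceph-ansible with whichever playbook was selected above,
    # defaulting to the plain deployment playbook
    shell: >-
      ansible-playbook
      -i {{ ceph_inventory }}
      {{ ceph_ansible_playbook | default('/usr/share/ceph-ansible/site-docker.yml') }}
</code></pre>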
<p>Feel free to skim through the <a href="https://review.openstack.org/583321">Ceph
upgrade</a> and <a href="https://review.openstack.org/589873">OKD
upgrade</a> patches to get an idea
of how the upgrades were implemented.</p>
<h2 id="cli-and-workflow">CLI and workflow</h2>
<p>In the CLI, the external installer upgrades got a new command <code>openstack
overcloud external-upgrade run</code>. (For minor version updates it is
<code>openstack overcloud external-update run</code>, service template authors
may decide if they want to distinguish between updates and upgrades,
or if they want to run the same code.)</p>
<p>The command is a part of the normal upgrade workflow, and should be
run between <code>openstack overcloud upgrade prepare</code> and <code>openstack
overcloud upgrade converge</code>. It is recommended to execute it after
<code>openstack overcloud upgrade run</code>, which corresponds to the place
within the upgrade workflow where we have been upgrading Ceph.</p>
<p><em>After introducing the new <code>external-upgrade run</code> command, we
removed the <code>ceph-upgrade run</code> command. This means that Ceph is no longer
a special citizen in the TripleO upgrade procedure, and uses generic
commands and hooks available to any other service.</em></p>
<h3 id="separate-execution-of-external-installers">Separate execution of external installers</h3>
<p>There may be multiple services utilizing external installers within a
single TripleO-managed environment, and the operator might wish to
upgrade them separately. <code>openstack overcloud external-upgrade run</code>
would upgrade all of them at the same time.</p>
<p>We started adding Ansible tags to the external upgrade and deploy
tasks, allowing us to select which installers we want to run. This way
<code>openstack overcloud external-upgrade run --tags ceph</code> would only run
ceph-ansible, similarly <code>openstack overcloud external-upgrade run
--tags openshift</code> would only run openshift-ansible. This also allows
fine-tuning the spot in the upgrade workflow where the operator wants to
run a particular external installer upgrade (e.g. before or after
upgrade of natively managed TripleO services).</p>
<p>A full upgrade workflow making use of these possibilities could then
perhaps look like this:</p>
<pre><code class="language-bash">openstack overcloud upgrade prepare <args>
openstack overcloud external-upgrade run --tags openshift
openstack overcloud upgrade run --roles Controller
openstack overcloud upgrade run --roles Compute
openstack overcloud external-upgrade run --tags ceph
openstack overcloud upgrade converge <args>
</code></pre>
<h1>OpenShift Origin in TripleO with multiple masters and multiple nodes</h1>
<p><em>2018-01-04, updated 2018-01-12</em></p>
<p>Earlier I wrote about how to
<a href="https://www.jistr.com/blog/2017-11-21-kubernetes-in-tripleo/">deploy vanilla Kubernetes as a TripleO service</a>.
This article follows suit in describing how to deploy OpenShift Origin
in the same way, also with 3 masters and 3 nodes, using TripleO Quickstart
as the driving mechanism. Please be aware that these posts describe a
work-in-progress development environment setup, not a polished final
end user experience.</p>
<h2 id="architecture">Architecture</h2>
<p>We use exactly the same architectural approach as described earlier in
the Kubernetes post. Please read the
<a href="https://www.jistr.com/blog/2017-11-21-kubernetes-in-tripleo/#architecture">Architecture section of the Kubernetes post</a>
if you are interested in the general perspective.</p>
<h3 id="integration-of-openshift-ansible">Integration of openshift-ansible</h3>
<p>We install OpenShift Origin by integrating with the
openshift-ansible installer. We use TripleO’s <code>external_deploy_tasks</code>
to generate the necessary input files for openshift-ansible, and then
we execute it. The files that we generate are:</p>
<ul>
<li>
<p>an inventory,</p>
</li>
<li>
<p>a playbook, configuring NetworkManager and then including the
<code>byo/config.yml</code> playbook from openshift-ansible (“byo” stands for
“bring your own hosts”; TripleO can take care of provisioning the
hosts) – see the sketch after this list,</p>
</li>
<li>
<p>a file with Ansible variables for openshift-ansible.</p>
</li>
</ul>
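<p>The generated playbook has roughly this shape (a hand-written
sketch; the real generated content lives in the service template
linked below):</p>
<pre><code class="language-yaml">- hosts: masters:nodes
  tasks:
    # openshift-ansible expects NetworkManager to be running
    # (illustrative placeholder for the NetworkManager configuration)
    - name: configure NetworkManager
      command: systemctl enable --now NetworkManager

- include: /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
</code></pre>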
<p>If you want to explore the code, see the service template
<a href="https://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/extraconfig/services/openshift-master.yaml?id=15b279f4ce882b9b1a6cdcf9f366aa7122e9496b">openshift-master.yaml as of 12th January 2018</a>
There’s also
<a href="https://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/extraconfig/services/openshift-worker.yaml?id=15b279f4ce882b9b1a6cdcf9f366aa7122e9496b">openshift-worker.yaml</a> service
template, which tags nodes to be recognized by the inventory generator
as workers, and sets up worker node firewall rules.</p>
<h2 id="deployment">Deployment</h2>
<h3 id="prepare-the-environment">Prepare the environment</h3>
<p>The assumed starting point will be having a deployed undercloud
with 6 virtual baremetal nodes defined and ready for use.</p>
<p>There are several paths to do this with TripleO Quickstart, the
easiest one is probably to deploy a full 3 controller + 3 compute
environment using <code>--nodes config/nodes/3ctlr_3comp.yml</code>, and then
delete the overcloud stack. If your virt host doesn’t have enough
capacity for that many VMs, you can use a smaller configuration,
e.g. <code>1ctlr_1comp.yml</code> or just <code>1ctlr.yml</code>.</p>
<p>For detailed information on how to deploy with Quickstart, please refer
to
<a href="https://docs.openstack.org/tripleo-quickstart/latest/getting-started.html">TripleO Quickstart docs</a>.</p>
<h3 id="deploy-the-overcloud">Deploy the overcloud</h3>
<p>Let’s prepare <code>extra-oooq-vars.yml</code> file. It’s a file with Quickstart
variables, so it will have to be <em>on the host where you run
Quickstart</em>. The contents will be as follows:</p>
<pre><code class="language-yaml"># use t-h-t with our cherry-picks
overcloud_templates_path: /home/stack/tripleo-heat-templates
# use NTP, clustered systems don't like time skew
ntp_args: --ntp-server pool.ntp.org
# make validation errors non-fatal
validation_args: ''
# network config in the featureset is for CI, override it back to defaults
network_args: -e /home/stack/net-config-noop.yaml
# deploy with config-download mechanism, we'll execute the actual
# software deployment via ansible subsequently
config_download_args: >-
-e /home/stack/tripleo-heat-templates/environments/config-download-environment.yaml
--disable-validations
--verbose
# do not run the workflow
deploy_steps_ansible_workflow: false
</code></pre>
<p>And <code>/home/stack/net-config-noop.yaml</code> file (referenced above) will
have to be <em>on the undercloud</em>, and it has these contents:</p>
<pre><code class="language-yaml">resource_registry:
OS::TripleO::Controller::Net::SoftwareConfig: /usr/share/openstack-tripleo-heat-templates/net-config-noop.yaml
OS::TripleO::Compute::Net::SoftwareConfig: /usr/share/openstack-tripleo-heat-templates/net-config-noop.yaml
</code></pre>
<p>For OpenShift Origin specifically, it’s important to set the
controllers’ NIC config to <code>net-config-noop.yaml</code> to avoid depending
on OVS. (The default <code>net-config-bridge.yaml</code> would create a <code>br-ex</code>
OVS bridge. Then openshift-ansible would stop OVS on baremetal and
start OVS in a container. Given that the default route would already
go through <code>br-ex</code> at that point, stopping baremetal OVS could
effectively “brick” the controllers in terms of network traffic.)</p>
<p>Now let’s reuse the undercloud deployed previously by Quickstart, and
deploy the overcloud Heat stack. This could be done with
<code>quickstart.sh</code> too, but personally I prefer running
<code>ansible-playbook</code> for more direct control:</p>
<pre><code class="language-bash"># run this where you run Quickstart (likely not the undercloud)
# VIRTHOST must point to the machine that hosts your Quickstart VMs,
# edit this if necessary
export VIRTHOST=$(hostname -f)
# WORKSPACE must point to your Quickstart workspace directory,
# edit this if necessary
export WORKSPACE=$HOME/.quickstart
source $WORKSPACE/bin/activate
export ANSIBLE_ROLES_PATH=$WORKSPACE/usr/local/share/ansible/roles:$WORKSPACE/usr/local/share/tripleo-quickstart/roles
export ANSIBLE_LIBRARY=$WORKSPACE/usr/local/share/ansible:$WORKSPACE/usr/local/share/tripleo-quickstart/library
export SSH_CONFIG=$WORKSPACE/ssh.config.ansible
export ANSIBLE_SSH_ARGS="-F ${SSH_CONFIG}"
ansible-playbook -v \
-i $WORKSPACE/hosts \
-e local_working_dir=$WORKSPACE \
-e virthost=$VIRTHOST \
-e @$WORKSPACE/config/release/tripleo-ci/master.yml \
-e @$WORKSPACE/config/nodes/3ctlr_3comp.yml \
-e @$WORKSPACE/config/general_config/featureset033.yml \
-e @extra-oooq-vars.yml \
$WORKSPACE/playbooks/quickstart-extras-overcloud.yml
</code></pre>
<p>Now let’s install openshift-ansible <em>on the undercloud</em>. For
development purposes I get the 3.6 branch from source, but it can be
installed via RPM too:</p>
<pre><code class="language-bash">sudo yum -y install centos-release-openshift-origin36
sudo yum -y install openshift-ansible-playbooks
</code></pre>
<p>With the overcloud Heat stack created and openshift-ansible present,
we can fetch the overcloud software config definition and deploy it
with Ansible. In real use cases this can be done together with Heat
stack creation via the <code>openstack overcloud deploy</code> command, but we’re
taking an explicit approach here:</p>
<pre><code class="language-bash"># clean any previous config downloads
rm -rf ~/config-download/tripleo*
# produce Ansible playbooks from Heat stack outputs
tripleo-config-download -s overcloud -o ~/config-download
# skip this in case you want to manually check fingerprints
export ANSIBLE_HOST_KEY_CHECKING=no
# deploy the software configuration of overcloud
ansible-playbook \
-v \
-i /usr/bin/tripleo-ansible-inventory \
~/config-download/tripleo-*/deploy_steps_playbook.yaml
</code></pre>
<p>This applies the software configuration, including installation of
OpenShift Origin via openshift-ansible.</p>
<h3 id="hello-origin-in-tripleo">Hello Origin in TripleO</h3>
<p>At the current stage, it’s best to ssh to an overcloud controller node
to manage the Origin cluster with <code>oc</code> or <code>kubectl</code>.</p>
<p>After smoke testing with e.g. <code>oc status</code>, you can try deploying
something on the Origin cluster, e.g. according to the instructions at
<a href="https://docs.openshift.org/3.6/getting_started/developers_cli.html">OpenShift Origin CLI Walkthrough</a>.</p>Jiří Stránskýhttps://www.jistr.com/about/Earlier i wrote about how to deploy vanilla Kubernetes as a TripleO service. This article follows suit in describing how to deploy OpenShift Origin in the same way, also 3 masters and 3 nodes, using TripleO Quickstart as the driving mechanism. Please be aware that these posts describe a work-in-progress development environment setup, not a polished final end user experience.
<h1>Kubernetes in TripleO with multiple masters and multiple nodes</h1>
<p><em>2017-11-21</em></p>
<p>TripleO already deploys OpenStack into containers. Going forward we
would like to integrate a container orchestration engine. We have been
experimenting with Kubernetes and OpenShift. Let’s look at deploying
vanilla Kubernetes in this blog post. We’ll deploy a Kubernetes
cluster of 3 masters + 3 nodes, utilizing the TripleO composable
services framework and the Kubespray installer, and drive the
deployment using TripleO Quickstart.</p>
<h2 id="architecture">Architecture</h2>
<p>First let’s skim through the architecture enhancement made to support
external installers within the new Ansible deployment mechanism.</p>
<p>We now allow the composable service templates to emit
<code>external_deploy_tasks</code>. These are Ansible tasks executed on the
undercloud. They can read the full Ansible inventory of the deployment
playbook when needed, which makes them fit for running complex
out-of-tree (meaning out-of-tripleo-heat-templates) installers like
Kubespray or Ceph-Ansible.</p>
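<p>Schematically, a composable service template can emit something
like this (a simplified, hand-written sketch – see the real
kubernetes-master.yaml linked further below):</p>
<pre><code class="language-yaml">outputs:
  role_data:
    value:
      service_name: kubernetes_master
      external_deploy_tasks:
        - name: kubernetes_master step 2
          when: step == '2'
          block:
            # these tasks run on the undercloud and can read the
            # full inventory of the deployment playbook
            - name: generate kubespray inventory
              copy:
                dest: "{{ playbook_dir }}/kubespray/inventory.yml"
                content: "..."
</code></pre>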
<p>We had two options for how to approach these external installers – either
pull them in via a playbook include directly to the <code>ansible-playbook</code>
process for overcloud deployment, or run a nested <code>ansible-playbook</code>
subprocess (e.g. for Kubespray) from the outer <code>ansible-playbook</code>
process. We went for the latter as it provides more control and can
also execute non-Ansible installers. The summary of important
differences between the approaches is in the following table:</p>
<table class="textual">
<thead>
<tr>
<th> </th>
<th>Playbook include</th>
<th>Subprocess</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Supported tools</strong></td>
<td>Ansible only</td>
<td>Any</td>
</tr>
<tr>
<td><strong>Inventory</strong></td>
<td>Inherited from outer playbook</td>
<td>Separate, must be generated</td>
</tr>
<tr>
<td><strong>Variables, tags</strong></td>
<td>Inherited from outer playbook</td>
<td>Separate, must be generated</td>
</tr>
<tr>
<td><strong>Progress logging</strong></td>
<td>Progressively each task</td>
<td>Whole process output at once</td>
</tr>
</tbody>
</table>
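<p>With the subprocess approach that we chose, the external installer
run itself is just another Ansible task, along these lines
(illustrative sketch; paths and file names are assumptions):</p>
<pre><code class="language-yaml">- name: run kubespray
  # output arrives all at once when the subprocess finishes,
  # as noted in the table above
  shell: >-
    ansible-playbook
    -i {{ playbook_dir }}/kubespray/inventory.yml
    -e @{{ playbook_dir }}/kubespray/global_vars.yml
    {{ playbook_dir }}/kubespray/cluster.yml
</code></pre>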
<p>The <code>external_deploy_tasks</code> are step-based, and are
interleaved between the “normal” deployment steps which currently run
Puppet and manage containers. This way we can decide in which
particular step we want an external installer to be run. The situation
is depicted (in a simplified way) in the following diagram:</p>
<p><img src="https://www.jistr.com/assets/images/posts/2017-11-21-kubernetes-in-tripleo/workflow.png" alt="Overcloud deployment workflow" /></p>
<h3 id="implementation-for-kubespray">Implementation for Kubespray</h3>
<p>In Kubespray’s case specifically, we use the <code>external_deploy_tasks</code>
to generate files needed by the Kubespray installer and then execute
the installer. The files used by Kubespray that we generate are:</p>
<ul>
<li>
<p>an inventory,</p>
</li>
<li>
<p>a simple playbook (including Kubespray’s <code>cluster.yml</code>),</p>
</li>
<li>
<p>a file with Ansible variables that configures Kubespray.</p>
</li>
</ul>
<p>If you want to explore the code, see the service template
<a href="https://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/extraconfig/services/kubernetes-master.yaml?id=d6e5cc181b6f1a3d0cd00f28c8d7e19e1efc85d0#n58">kubernetes-master.yaml as of 20th November 2017</a>.
There’s also
<a href="https://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/extraconfig/services/kubernetes-worker.yaml?id=d6e5cc181b6f1a3d0cd00f28c8d7e19e1efc85d0">kubernetes-worker.yaml</a> service
template, which is just used for tagging nodes to be recognized by the
inventory generator as workers, and for setting up worker node
firewall rules.</p>
<h2 id="deployment">Deployment</h2>
<h3 id="prepare-the-environment">Prepare the environment</h3>
<p>The assumed starting point for deploying Kubernetes will be having a
deployed undercloud with 6 virtual baremetal nodes defined and ready
for use.</p>
<p>There are several paths to do this with TripleO Quickstart, the
easiest one is probably to deploy a full 3 controller + 3 compute
environment using <code>--nodes config/nodes/3ctlr_3comp.yml</code>, and then
delete the overcloud stack. If your virt host doesn’t have enough
capacity for that many VMs, you can use a smaller configuration,
e.g. <code>1ctlr_1comp.yml</code> or just <code>1ctlr.yml</code>.</p>
<p>For detailed information on how to deploy with Quickstart, please refer
to
<a href="https://docs.openstack.org/tripleo-quickstart/latest/getting-started.html">TripleO Quickstart docs</a>.</p>
<h3 id="deploy-the-overcloud">Deploy the overcloud</h3>
<p>Let’s prepare <code>extra-oooq-vars.yml</code> file. It’s a file with Quickstart
variables, so it will have to be <em>on the host where you run
Quickstart</em>. The contents will be as follows:</p>
<pre><code class="language-yaml"># use t-h-t with our cherry-picks
overcloud_templates_path: /home/stack/tripleo-heat-templates
# specify NTP server this way. Improved in https://review.openstack.org/516958
extra_tht_config_args: --ntp-server pool.ntp.org
# make validation errors non-fatal
validation_args: ''
# network config in the featureset is for CI, override it back to defaults
network_args: -e /home/stack/net-config-defaults.yaml
# deploy with config-download mechanism, we'll execute the actual
# software deployment via ansible subsequently
config_download_args: >-
-e /home/stack/tripleo-heat-templates/environments/config-download-environment.yaml
--disable-validations
--verbose
# do not run the workflow
deploy_steps_ansible_workflow: false
</code></pre>
<p>And <code>/home/stack/net-config-defaults.yaml</code> will have to be <em>on the
undercloud</em>, and it has these contents:</p>
<pre><code class="language-yaml">resource_registry:
OS::TripleO::Controller::Net::SoftwareConfig: /usr/share/openstack-tripleo-heat-templates/net-config-bridge.yaml
OS::TripleO::Compute::Net::SoftwareConfig: /usr/share/openstack-tripleo-heat-templates/net-config-noop.yaml
</code></pre>
<p>Now let’s reuse the undercloud deployed previously by Quickstart, and
deploy the overcloud Heat stack. This could be done with
<code>quickstart.sh</code> too, but personally I prefer running
<code>ansible-playbook</code> for more direct control:</p>
<pre><code class="language-bash"># run this where you run Quickstart (likely not the undercloud)
# VIRTHOST must point to the machine that hosts your Quickstart VMs,
# edit this if necessary
export VIRTHOST=$(hostname -f)
# WORKSPACE must point to your Quickstart workspace directory,
# edit this if necessary
export WORKSPACE=$HOME/.quickstart
source $WORKSPACE/bin/activate
export ANSIBLE_ROLES_PATH=$WORKSPACE/usr/local/share/ansible/roles:$WORKSPACE/usr/local/share/tripleo-quickstart/roles
export ANSIBLE_LIBRARY=$WORKSPACE/usr/local/share/ansible:$WORKSPACE/usr/local/share/tripleo-quickstart/library
export SSH_CONFIG=$WORKSPACE/ssh.config.ansible
export ANSIBLE_SSH_ARGS="-F ${SSH_CONFIG}"
ansible-playbook -v \
-i $WORKSPACE/hosts \
-e local_working_dir=$WORKSPACE \
-e virthost=$VIRTHOST \
-e @$WORKSPACE/config/release/tripleo-ci/master.yml \
-e @$WORKSPACE/config/nodes/3ctlr_3comp.yml \
-e @$WORKSPACE/config/general_config/featureset026.yml \
-e @extra-oooq-vars.yml \
$WORKSPACE/playbooks/quickstart-extras-overcloud.yml
</code></pre>
<p>With the overcloud Heat stack created, we can now fetch the overcloud
software config definition and deploy it with Ansible. In the future
this will be neatly hidden behind CLI and WUI interfaces, but for now
it’s another WIP feature, so we’ll take a more explicit approach:</p>
<pre><code class="language-bash"># clean any previous config downloads
rm -rf ~/config-download/tripleo*
# produce Ansible playbooks from Heat stack outputs
tripleo-config-download -s overcloud -o ~/config-download
# skip this in case you want to manually check fingerprints
export ANSIBLE_HOST_KEY_CHECKING=no
# deploy the software configuration of overcloud
ansible-playbook \
-v \
-i /usr/bin/tripleo-ansible-inventory \
~/config-download/tripleo-*/deploy_steps_playbook.yaml
</code></pre>
<p>This applies the software configuration, including installation of
Kubernetes via Kubespray.</p>
<h3 id="hello-kubernetes-in-tripleo">Hello Kubernetes in TripleO</h3>
<p>For now it’s best to ssh to an overcloud controller node to manage
Kubernetes with <code>kubectl</code>. External HA access to the Kubernetes API
is <a href="https://review.openstack.org/516894">being worked on</a>.</p>
<p>After smoke testing with e.g. <code>kubectl get nodes</code>, you can try
deploying something on the Kubernetes cluster; e.g. the
<a href="https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/">Use a Service to Access an Application in a Cluster</a> tutorial
from the Kubernetes docs is a nice one.</p>
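<p>For instance (illustrative; the image is the echoserver sample used
by Kubernetes docs tutorials of that era):</p>
<pre><code class="language-bash"># on an overcloud controller node
kubectl get nodes
kubectl run hello --image=gcr.io/google-samples/node-hello:1.0 --port=8080
kubectl expose deployment hello --type=NodePort
kubectl get svc hello
</code></pre>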