SuccessBot Says
clarkb [1]: infra added city cloud to the pool of test nodes.
pabelanger [2]: opensuse-422-infracloud-chocolate-8977043 launched by nodepool.
All: [3]
A devstack review [4] that adds a new etcd3 service.
Two options to enable the DLM use case with Tooz (for eventlet-based services) [5][6], as sketched below.
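As a rough illustration of the DLM use case those reviews enable, here is a minimal Tooz sketch, assuming the proposed etcd3 driver; the etcd3:// URL, member id, lock name, and do_exclusive_work() are placeholders, not values from the thread.

    from tooz import coordination

    def do_exclusive_work():
        # Placeholder for work that must only run on one node at a time.
        pass

    # Connect to the coordination backend; the etcd3:// scheme assumes the
    # proposed etcd3 driver, and the address/member id are illustrative only.
    coordinator = coordination.get_coordinator(
        'etcd3://127.0.0.1:2379', b'my-service-worker-1')
    coordinator.start(start_heart=True)

    # Acquire a distributed lock so only one worker in the cluster proceeds.
    lock = coordinator.get_lock(b'resize-instance-42')
    with lock:
        do_exclusive_work()

    coordinator.stop()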
Full thread: [7]
Do We Want to be Publishing Binary Container Images?
During the Forum, there was a discussion on collaboration between the various teams building or consuming container images.
A decision is needed on how to publish images from the various teams to Docker Hub or other container registries.
The community has refrained from publishing binary packages in other formats such as debs and RPMs; instead, we have left it to downstream consumers to build production packages.
This would require more tracking of upstream issues (bugs, CVEs, etc) to ensure the images are updated as needed.
Given our security and stable team resources, this might not be a good idea at this time.
Kolla is interested in doing this for daily builds. Everything is licensed under the ASL, which gives no guarantees.
Even if you mark something as not for production use, people still use it; take the recent user survey showing DevStack being used in production.
Kolla today publishes build instructions, and manually provides built containers every release.
Built containers would run through our CI gate, so others don’t have to have a local CI build pipeline.
Things we publish to PyPI are different from this proposal:
The formats published to PyPI are a source format (sdist) and a developer-friendly but production-ready format (wheel).
Most of our services are not packaged and published to PyPI; the libraries are, to make them easy to consume in our CI.
The artifacts on PyPI contain references to dependencies; the dependencies are not built into the packages themselves.
Iteration on the infra spec review for publishing to Docker Hub has started [8].
Full thread: [9]
RFC Cross Project Request ID Tracking
In the logging Forum session, it was brought up how much effort operators have to put into reconstructing flows for things like server boot when they go wrong.
As a request jumps from service to service, the request-id is reset to something new.
Being able to query Elasticsearch for the same request-id across the communication between services would be useful.
There is a concern about trusting the request-id on the wire, because it’s coming from a random user.
We have a new concept of “service users”, which are a set of higher-privilege services that we use to wrap user requests.
The basic idea is:
Services will optionally take an inbound X-OpenStack-Request-ID, which we’ll strongly validate against the req-$uuid format.
They will continue to generate one as well.
When the context is built, we’ll check whether the service user was involved; if not, reset the request-id to the locally generated one (see the sketch after this list).
Both request-ids will be logged.
Python clients and callers will need to be augmented to pass the request-id in on requests.
Servers will opt into calling other services this way.
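A minimal sketch of that validation and reset logic, assuming the req-$uuid format described above; the function names and the caller_is_service_user flag are illustrative, not from the spec.

    import re
    import uuid

    # Accept an inbound X-OpenStack-Request-ID only if it matches "req-<uuid>".
    REQUEST_ID_RE = re.compile(
        r'^req-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$')

    def generate_request_id():
        # Generate a local request-id, as every service continues to do.
        return 'req-%s' % uuid.uuid4()

    def resolve_request_ids(inbound_id, caller_is_service_user):
        # Return (global_id, local_id) for logging. The inbound id is only
        # kept as the global id when it is well formed and the caller was
        # authenticated as a service user; otherwise it is reset to the
        # locally generated id.
        local_id = generate_request_id()
        if (inbound_id and caller_is_service_user
                and REQUEST_ID_RE.match(inbound_id)):
            return inbound_id, local_id
        return local_id, local_id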
The Oslo spec for this has been merged [10].
Full thread: [11]
Can We Stop Global Requirements Update (Cont.)
Gnocchi has gate issues with Babel this time. Julien plans to remove all oslo dependencies over the next few months.
The Cotyledon project was presented at a past summit as an alternative to oslo.service and a way of getting rid of eventlet (see the sketch below). The library lives under the telemetry umbrella for now.
The project doesn’t live under Oslo so that the greater Python ecosystem is encouraged to adopt and help maintain it.
Octavia is also using Cotyledon.
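For context, a minimal sketch of Cotyledon’s documented usage pattern; the Worker class and its loop are illustrative only.

    import time

    import cotyledon

    class Worker(cotyledon.Service):
        # An illustrative long-running worker process.

        def __init__(self, worker_id):
            super(Worker, self).__init__(worker_id)
            self._running = True

        def run(self):
            # Main loop; Cotyledon forks one OS process per worker, with no
            # eventlet monkey-patching involved.
            while self._running:
                time.sleep(1)

        def terminate(self):
            # Called on SIGTERM so the run() loop can exit cleanly.
            self._running = False

    def main():
        manager = cotyledon.ServiceManager()
        manager.add(Worker, workers=2)  # spawn two worker processes
        manager.run()                   # supervise and respawn on failure

    if __name__ == '__main__':
        main()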
Full thread: [12]
Revised Postgresql Deprecation Patch for Governance
In the Forum session we agreed to the following:
Explicitly warn in operator-facing documentation that Postgresql is less supported than MySQL.
SUSE is in the process of investigating migration from Postgresql to Galera for future versions of its OpenStack products.
The TC governance patch has been updated [13].
Current sticking points:
Whether it matters that the operator community is already largely in one camp or not.
Whether the future items listed as being harder are important enough to justify a strict trade-off here.
Whether it’s OK for the proposal to have a firm lean in tone, even though its set of concrete actions is fairly reversible and doesn’t commit to future removal of Postgresql.
What has been raised as being hard to do behind an abstraction layer like SQLAlchemy:
OpenStack services taking a more active role in managing the DBMS.
See the “Active or passive role with our database layer” summary below for this discussion.
The ability to have zero-downtime upgrades for services such as Keystone.
Expand/contract with code carefully dancing around the existence of two schemas simultaneously (e.g. Nova and Neutron).
This shouldn’t be a problem, because we use alembic or sqlalchemy-migrate to abstract away ALTER TABLE types.
Expand/contract using server-side triggers to reconcile the two schemas. This is more difficult because no such abstraction layer exists in SQLAlchemy, though it could be feasible to build one specific to OpenStack.
Consistent UTF-8 4 & 5 byte support in our APIs
Unicode itself only needs 4 bytes, and that is as much as any database supports right now. This problem was solved by SQLAlchemy well before Python 3 existed.
The requirement that Postgresql libraries be compiled for new users who just want to run unit tests.
New developers who aren’t concerned with Postgresql don’t have to run these tests.
OpenStack used the native python-MySQL driver, which required compiling, all the way through Kilo.
This is OpenStack; we are the glue to thousands of C-compiled libraries and packages.
Consistency around case sensitivity and collation.
MySQL defaults to case-insensitive collation.
Postgresql has almost no support for case-insensitive collation.
SQLAlchemy supports things like ilike() (see the sketch below).
The String datatype in SQLAlchemy guarantees case-insensitive behavior.
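As a small illustration of the ilike() point, a SQLAlchemy sketch; the users table here is hypothetical.

    import sqlalchemy as sa

    metadata = sa.MetaData()
    users = sa.Table(
        'users', metadata,
        sa.Column('id', sa.Integer, primary_key=True),
        sa.Column('name', sa.String(255)),
    )

    # ilike() gives a portable case-insensitive comparison: it emits ILIKE on
    # Postgresql and a lower() LIKE lower() comparison on backends without
    # native ILIKE, so queries don't depend on the backend's default collation.
    case_insensitive_match = users.c.name.ilike('%alice%')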
Top concerns that remain:
A1) Do not surprise users late: they should not find out they are on the less-traveled path only once they are deeply committed. It’s fine for users to choose that path, as long as they are informed they will need to be more self-reliant.
A2) Do not prevent features like zero-downtime upgrades in Keystone from making forward progress with a MySQL-only solution.
Orthogonal concerns:
B1) Postgresql was chosen by people in the past, maybe more than we realize; those are real users we don’t want to throw under the bus. Wholesale deletion is off the table. There’s no clear migration path off of it, and we’re missing data on who is using it.
B2) The upstream code shouldn’t be changed so irreparably (e.g. by deleting the SQLAlchemy layer) that it’s no longer possible to have alternative database backends.
The current proposal [13] addresses A1 and B1.
Full thread: [14]
[1] – http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2017-05-24.log.html
[2] – http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2017-05-24.log.html
[3] – https://wiki.openstack.org/wiki/Successes
[4] – https://review.openstack.org/#/c/445432/
[5] – https://review.openstack.org/#/c/466098/
[6] – https://review.openstack.org/#/c/466109/
[7] – http://lists.openstack.org/pipermail/openstack-dev/2017-May/thread.html#117370
[8] – https://review.openstack.org/447524
[9] – http://lists.openstack.org/pipermail/openstack-dev/2017-May/thread.html#116677
[10] – https://review.openstack.org/#/c/464746/
[11] – http://lists.openstack.org/pipermail/openstack-dev/2017-May/thread.html#116619
[12] – http://lists.openstack.org/pipermail/openstack-dev/2017-May/thread.html#116736
[13] – https://review.openstack.org/#/c/427880/
[14] – http://lists.openstack.org/pipermail/openstack-dev/2017-May/thread.html#116642