Release Notes

v0.10.2

  • PostgreSQL patch upgrade from v9.5.4 to v9.5.5 for all service instances
  • Service broker now always re-authenticates with Cloud Controller/UAA, so that requests no longer fail due to expired authentication tokens

v0.10.1

Optional syslog

Syslog configuration is now actually optional. Previously, the form allowed you to not configure a syslog host:port endpoint, but we didn’t actually think to… test it? That was awkward. So now we’ve both made it more explicit that you’re choosing not to ship logs to a syslog endpoint (the default “Local” option), and we actually deploy Dingo PostgreSQL correctly so that it is OK with your decision. Previously, if you didn’t configure “Remote Syslog”, the cell_z1 and cell_z2 servers would not start up successfully, because the Docker daemon had been misconfigured in how it should ship logs. No more. Thanks to Ming Ling from Pivotal for helping to debug this.

dpg support tool

We’ve introduced a support tool, dpg, to help inspect the cluster, discover running service instances and which Cloud Foundry org/space they belong to, and more. dpg is installed and pre-configured, for your platform administration pleasure, for the root user on the router/0 VM:

bosh ssh router/0
sudo su -
dpg ls

To see help, run dpg without arguments.

Example commands to test drive the support tool:

dpg
dpg target
dpg ls

The first column contains the Cloud Foundry service instance IDs, referenced as INSTANCE_ID in the dpg help above.

dpg status INSTANCE_ID

To create or delete service instances without the Cloud Foundry API/CLI:

dpg create my-first-cluster
dpg ls
dpg raw /service/my-first-cluster
dpg raw /service/my-first-cluster/members

Once the cluster is up and running:

dpg superuser-psql my-first-cluster

This will open an interactive psql console:

psql (9.4.5, server 9.5.3)
WARNING: psql major version 9.4, server major version 9.5.
         Some psql features might not work.
Type "help" for help.

postgres=#

Continuing with example commands:

dpg wale-backup-list my-first-cluster
dpg delete my-first-cluster

NOTE: in place of my-first-cluster, service instances created via Cloud Foundry will have a long UUID/GUID as their name; instances created by the sanity-test errand have names prefixed with T- (and can be deleted if you see them).
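
If you want to clean up leftover sanity-test instances, a sketch like the following should work. It assumes, as noted above, that dpg ls prints the service instance ID in the first column, one instance per line:

# On router/0 as root: delete any instance whose ID starts with T-
for id in $(dpg ls | awk '{print $1}' | grep '^T-'); do
  dpg delete "$id"
done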

v0.10.0

New feature - “clone from”

You can now create a new service instance cloned from the latest backups of an existing service instance, or of a previously existing (deleted) service instance, from the same org/space.

cf create-service dingo-postgresql cluster prod
cf create-service dingo-postgresql cluster staging -c '{"clone-from":"prod"}'

The clone-from feature also allows developers/admins to confirm that backups are working: make an edit in prod, create a clone from it, and confirm that the edit exists in the clone.
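
For example, a verification run might look like the sketch below. The marker table, the prod-verify name, and the $PROD_URI/$CLONE_URI connection strings are illustrative assumptions; in practice you would take connection credentials from a service key or app binding:

cf create-service dingo-postgresql cluster prod
# make a visible change in prod, e.g. create a marker table
psql "$PROD_URI" -c "CREATE TABLE backup_marker (created_at timestamptz DEFAULT now())"
# create a clone from prod's latest backups
cf create-service dingo-postgresql cluster prod-verify -c '{"clone-from":"prod"}'
# once prod-verify is running, confirm the marker table exists in the clone
psql "$CLONE_URI" -c "\dt backup_marker"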

Note: clone-from only works for service instances created since this feature was implemented.

A new integration-test errand has been added to test this feature against your Pivotal Cloud Foundry.

Upgrade requirements

After uploading the tile, please check the two “Archives” settings panes. The “Region” field might have been blanked out, requiring you to re-set the S3 bucket region.

[Screenshot: the “Region” field in the Archives settings pane, which must be re-set after upgrading to v0.10.0]

If you forget this step, then the sanity-test errand will exit with an error similar to:

check-object-store> ERROR: require default.region or default.endpoint

v0.9.2

Accidentally making a mistake with your AWS credentials or Amazon S3 buckets/regions no longer drops you into the pit of operations despair. Instead, the sanity-test errand will immediately verify that your credentials work, that each bucket exists, and that each bucket is in the region you said it was in.

Fail fast FTW.

We’ve also included some additional improvements for draining cells when shutting them down, restarting them, or upgrading them.

v0.9.1

Dingo PostgreSQL now supports Ops Manager v1.7, in addition to continuing to support v1.6.

Additionally, we are now using the latest, most secure stemcell published by the boffins at Pivotal.

v0.9

This version was created for the platform administrators, who we love dearly.

Upgrade requirements

You will need to increase the number of cells (see “Resource Config”) to 2 each (“Cell - Availability Zone 1” and “Cell - Availability Zone 2”). See below for discussion.

Notes

This edition includes a rewrite of the scheduling planner for cluster changes (new clusters and updated clusters). It both fixes a bug in placing new nodes in different availability zones, and implements “moving” nodes to different cells.

Moving nodes is an exciting new feature for Dingo PostgreSQL - which isn’t offered by any other Pivotal Network tile as far as we are aware. Administrators can now “move” the nodes in a cluster to different cells. They can even move the entire cluster, including the current leader/master.

In reality the nodes aren’t “moved”. Rather, the cluster is expanded, replicated, and collapsed. For each node that must be “moved”, a new node is added (the cluster expands) and the contents of the leader are replicated to the new nodes. Then, the nodes being moved are deleted. If this includes the leader, then a new leader is elected and the routing mesh is updated.

An administrator can perform this function using the built-in cf update-service command, passing a cells parameter:

cf target -o their-org -s their-space
cf update-service their-db -c '{"cells": ["10.244.22.2", "10.244.21.7"]}'

In the example above, assume that their-db is a two-node cluster. If both of the cells specified are different from the cells currently used by the nodes, then the cluster will grow from two nodes to four, and then the original two nodes will be deleted. One of the two new nodes will become the new leader and the routing mesh will be updated.
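
If you want to watch the expand/collapse happen, one approach is to poll the cluster’s membership with the dpg tool described in v0.10.1 above. This is a sketch; it assumes the cluster’s key under /service is the service instance’s UUID, per the dpg naming note above:

# on router/0 as root, while the update-service is in progress
dpg raw /service/INSTANCE_ID/members
# expect the member count to grow from two to four, then shrink back to two
# as the original nodes are deleted and a new leader is elected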

The minimum footprint for cells is now four: two in each of two different availability zones. This allows the sanity-test errand to test that it can move a cluster’s nodes to two different cells.

Where do the ["10.244.22.2", "10.244.21.7"] values for cells come from?

We now introduce some administrator-only API endpoints provided by the service broker itself (whereas the cf update-service command operates via the Cloud Foundry API).

This release provides the following broker API endpoints:

  • GET /admin/cells
  • GET /admin/service_instances/UUID

The former, GET /admin/cells, can be used to list all available cells and determine which availability zones they are in. Currently the cf update-service -c '{"cells": [UUIDs]}' command uses the UUID values from the /admin/cells endpoint.

The latter, GET /admin/service_instances/UUID, can be used to determine which cells are currently being used by any given service instance. The UUID is the Cloud Foundry UUID for the service instance.
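
As a sketch, both endpoints can be queried with curl. BROKER_HOST, PORT, and the admin:PASSWORD credentials below are placeholders for your deployment’s actual broker address and credentials; we are assuming the admin endpoints use the broker’s HTTP basic auth:

# list all available cells and their availability zones
curl -u admin:PASSWORD http://BROKER_HOST:PORT/admin/cells

# show which cells a given service instance's nodes currently occupy
curl -u admin:PASSWORD http://BROKER_HOST:PORT/admin/service_instances/UUID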

One way to determine the UUID for a service instance is:

cf service their-db --guid

Alternatively, you can use cf curl /v2/service_instances or similar commands to discover service instances and their UUIDs.
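
For example, here is a sketch that uses jq to pull the GUID out of the standard v2 response shape:

# look up a service instance by name and print its GUID
cf curl "/v2/service_instances?q=name:their-db" | jq -r '.resources[].metadata.guid'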

v0.8

First version made available on Pivotal Network.