VSM v1.1 engineering build is just released


ywang19
Administrator
You can get the binary package at https://github.com/01org/virtual-storage-manager/releases/download/v1.1/vsm-1.1-20150509.tar.gz.

Below is the release note:

=================

Special Notes

    Starting from v1.1, the VSM dependencies are maintained in the vsm-dependencies repository for tracking and troubleshooting.
    Starting from v1.1, an automatic deployment tool is provided to deploy the whole VSM controller and agents from one place.

New Features

    VSM-156 add sanity check tool to help identify potential issues before or after deployment
    VSM-159 add issue reporting tool
    VSM-184 add automated script to help deploy VSM on multiple nodes
    VSM-242 Allow user to modify ceph.conf outside VSM

Resolved bugs

    VSM-4 "Average Response Time" missing in dashboard Overview panel "VSM Status" section.
    VSM-15 VSM-backup prompt info not correct
    VSM-25 VSM Dashboard | Capacity of hard drives is wrong and percentage used capacity is not correct.
    VSM-26 [CDEK-1664] VSM | Not possible to replace node if ceph contain only 3 nodes.
    VSM-29 vsm-agent process causes high i/o on os disk
    VSM-33 negative update time in RBD list
    VSM-51 Install Fails for VSM 0.8.0 Engineering Build Release
    VSM-113 [CDEK-1835] VSM | /var/log/httpd/error_log - constantly ongoing messages [error]
    VSM-121 Storage node unable to connect to controller although network is OK and all setting correct
    VSM-123 Storage node will not be able to contact controller node to install if http proxy set
    VSM-124 [CDEK-1852] VSM | adding possibility to manipulate ceph values in cluster.manifest file
    VSM-166 cluster_manifest sanity check program gives incorrect advice for auth_keys
    VSM-168 [CDEK1800] VSM_CLI | remove mds - doesn't update vsm database
    VSM-171 [CDEK1672] VSM_CLI | list shows Admin network in Public IP section
    VSM-176 SSL certificate password is stored in a plain text file
    VSM-177 wrong /etc/fstab entry for osd device mount point
    VSM-179 keep ceph.conf up to date when executing "remove server" operations.
    VSM-193 hard-coded cluster id
    VSM-207 can't assume eth0 device name
    VSM-216 Add storage group requires at least 3 nodes
    VSM-217 Problem with replication size on "pool" created on newly added storage group.
    VSM-224 Controller node error in /var/log/httpd/error_log - constantly ongoing messages [error]
    VSM-230 when presenting pool to openstack, cache tiering pools should be listed.
    VSM-233 console blocks when running automatic installation procedure
    VSM-236 no way to check manifest correctness after editing them
    VSM-239 with automatic deployment, the execution is blocked at asking if start mysql service
    VSM-244 Internal server error when installing v1.1

Known issues



===============


Regards,
-yaguang

Re: VSM v1.1 engineering build is just released

ywang19
Administrator
A bugfix release v1.1_1 is now available at https://github.com/01org/virtual-storage-manager/releases/tag/v1.1_1; here are the release notes:


Special Notes

    this is a bugfix release for 1.1, which fixes the issues found in VSM-242 ("Allow user to modify ceph.conf outside VSM"). It also adds a new script, uninstall.sh, to help uninstall VSM in case the user wants to restart the installation.

New Features

    VSM-209 support multiple subnets

Resolved bugs

    VSM-260 the check_network in server_manifest will be wrong when it has a single network card

Known issues


Best regards,
-yaguang

Re: VSM v1.1 engineering build is just released

ywang19
Administrator
Two features deserve emphasis in this release:
i) support for updating ceph.conf even after the ceph cluster is deployed, which is useful if people want to modify parameters like heartbeat intervals in ceph.conf but still expect VSM to manage the cluster. The command line is as follows:
     > import_ceph_conf <cluster name> <new ceph.conf file>
     > DON'T use "/etc/ceph/ceph.conf" for the second argument, as it will cause access conflicts.
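The workflow above can be sketched as a short shell session. The cluster name "ceph" and all paths here are illustrative assumptions, not taken from the release notes:

```shell
# Work on a copy of the config -- passing /etc/ceph/ceph.conf itself
# causes access conflicts, per the note above.
src=/tmp/demo-ceph.conf                      # stand-in for /etc/ceph/ceph.conf
printf '[global]\nosd heartbeat interval = 12\n' > "$src"

cp "$src" /tmp/ceph.conf.new                 # edit this copy as needed, then:
# import_ceph_conf ceph /tmp/ceph.conf.new   # VSM CLI call (not run here)
```

The point is simply that the file handed to import_ceph_conf should be a separate copy, never the live /etc/ceph/ceph.conf.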

ii) support for multiple subnets in the definition of management_addr/ceph_public_addr/ceph_cluster_addr in cluster.manifest, which is useful in production environments where the three networks are on separate subnets.
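As an illustration only (the bracketed section names follow the cluster.manifest convention; the subnet values are made up), a manifest carrying more than one subnet per network might look like:

```
[management_addr]
192.168.10.0/24
192.168.11.0/24

[ceph_public_addr]
192.168.20.0/24

[ceph_cluster_addr]
192.168.30.0/24
```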