We are currently experiencing issues with two of the hypervisor nodes in the NORTH-1 region, and the instances running on those nodes are unavailable.
We are working to resolve this issue.
News
Serious vulnerability in sudo (CVE-2021-3156)
Make sure to install the latest security updates in your instances to fix a serious vulnerability in sudo (CVE-2021-3156) that lets any local user run any command as root without entering a password.
In combination with other, less severe exploits, this can in some cases be used to compromise your instances remotely.
Read more about it: https://www.openwall.com/lists/oss-security/2021/01/26/3
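If you are unsure whether a given instance has been patched yet, the advisory linked above describes a quick test: running sudoedit -s / as a non-root user produces an error beginning with "sudoedit:" on vulnerable builds and with "usage:" on patched ones. Below is a minimal Python sketch of that check; it is only a heuristic and behaviour may differ between sudo builds, so treat the result as a hint rather than a verdict.

```python
# check_sudo.py -- rough check for CVE-2021-3156, based on the test
# described in the advisory linked above. Run as an ordinary (non-root) user.
import subprocess

def sudo_looks_vulnerable() -> bool:
    """Return True if sudoedit's error output matches the vulnerable pattern."""
    result = subprocess.run(
        ["sudoedit", "-s", "/"],
        capture_output=True,
        text=True,
    )
    stderr = result.stderr.strip()
    # Vulnerable builds complain "sudoedit: ..."; patched builds print a usage message.
    return stderr.startswith("sudoedit:")

if __name__ == "__main__":
    print("possibly VULNERABLE" if sudo_looks_vulnerable() else "looks patched")
```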
NORTH-1 will replace the HPC2N region.
The HPC2N region will be removed on 8/2, so make sure that all instances and data are moved to either EAST-1 or WEST-1 before 7/2.
If you need assistance, please send a support ticket and let us know.
We will be replacing both storage and compute, and since the setup in the HPC2N region dates from the 2015 pilot cloud, we unfortunately could not do an in-place upgrade.
The new NORTH-1 region will soon be available. It will use 2.5 GHz AMD CPUs, and the boot disks will now use flash storage.
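If you would rather script the move to EAST-1 or WEST-1 than rebuild instances by hand, one option is a snapshot-and-copy workflow with the openstacksdk Python client. The sketch below is illustrative only: it assumes a clouds.yaml entry named "ssc" with credentials valid in both regions, and the server name, image name, and disk format are placeholders.

```python
# migrate_image.py -- sketch of copying an instance image between regions
# with openstacksdk. The cloud name "ssc" and the server/image names are
# placeholders for illustration only.
import openstack

SRC_REGION = "HPC2N"
DST_REGION = "WEST-1"

src = openstack.connect(cloud="ssc", region_name=SRC_REGION)
dst = openstack.connect(cloud="ssc", region_name=DST_REGION)

# 1. Snapshot the instance in the old region.
server = src.compute.find_server("my-instance")
image = src.compute.create_server_image(server, "my-instance-snap", wait=True)

# 2. Download the snapshot data from the old region.
data = src.image.download_image(image)

# 3. Upload it as a new image in the new region, then boot from it there.
dst.image.create_image(
    name="my-instance-snap",
    data=data,
    disk_format="qcow2",
    container_format="bare",
)
```

Note that volumes and object storage need to be copied separately, and for large images it is usually better to stream the download to disk than to hold it in memory.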
From Pilot to Production
As SNIC Science Cloud has gone from a pilot to a production resource, the pilot regions in the cloud will be replaced by new regions with production hardware.
The region at C3SE has already been replaced by the new WEST-1 region; running OpenStack Rocky on new hardware.
The other pilot cloud-regions at UPPMAX and HPC2N will soon be replaced with the EAST-1 and NORTH-1 regions.
If you are starting new projects in the cloud, we suggest using the WEST-1 region for now until the other regions become available; otherwise you will soon have to migrate your workloads to the new regions.
The compute and storage of the HPC2N region are temporarily down
Update: The downtime will last until 22/4, exact time unknown.
The electrical work did not go as smoothly as planned, resulting in a cooling outage for the compute nodes and storage in the HPC2N region.
Maintenance with downtime in the HPC2N region.
Planned downtime in the HPC2N region on Monday the 20th of April between 06:00 and 12:00, and on Tuesday the 21st of April between 11:00 and 17:00, due to urgent electrical work. All running instances will be suspended before the outage and restarted afterwards.
The other regions will not be affected, so if you can, we suggest moving your workloads to the new WEST-1 region, which runs a much more recent version of OpenStack on new hardware.
UPPMAX region unavailable
Due to a broken network fiber (2020-01-30), the region is currently unavailable. The ETA for the repair is 20:00 UTC, 2020-01-30.
Outages in the HPC2N region during the holidays
Due to cooling issues on the 2nd and 7th of January, there were short outages in the HPC2N region and all running instances were shut down unexpectedly. The underlying issue causing the cooling problems has been resolved, but you might need to start up your instances again manually (a scripted approach is sketched below).
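For projects with many instances, restarting them one by one in the dashboard is tedious; a short openstacksdk sketch like the one below can find instances that are still shut off and start them again. The cloud name "ssc" and the region name are placeholder assumptions.

```python
# restart_instances.py -- sketch: start every instance of yours that is
# still shut off after the outage. "ssc" and the region name are placeholders.
import openstack

conn = openstack.connect(cloud="ssc", region_name="HPC2N")

for server in conn.compute.servers():
    if server.status == "SHUTOFF":
        print(f"starting {server.name} ({server.id})")
        conn.compute.start_server(server)
```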
The UPPMAX region is temporarily down
Due to a datacenter cooling failure on the morning of Thursday the 12th of December, we were forced to do an emergency shutdown of the UPPMAX region. We are working to resolve this issue and apologize for the inconvenience.
New hardware in the C3SE region
Final acceptance testing is currently ongoing; more information about the new hardware can be found at https://www.c3se.chalmers.se/about/SSC/. The upgrade also updates OpenStack to the Rocky release; more information about Rocky can be found at https://www.openstack.org/software/rocky/.