Monday, 11 May 2015

New home for the blog

This blog has been moved to the new developer portal for Dimension Data:
https://developer.dimensiondata.com/blog/DEV

Friday, 24 April 2015

Using the MCP 2.0 commands for PowerShell

To support the launch of MCP 2.0 functionality in NA9, we have extended the PowerShell module with the following MCP 2.0 cmdlets:
New-CaasNetworkDomain
New-CaasVlan
Get-CaasVlan
Get-CaasNetworkDomain
New-CaasServerOnNetworkDomain


https://community.opsourcecloud.net/Browse.jsp?id=5b71bb9ed81101d3222a0182b91cb986

To provision a Network Domain.

PS C:\> New-CaasNetworkDomain -Location "NA9" -Name "My test domain" -Description "the test domain" -Type Essentials


operation    : DEPLOY_NETWORK_DOMAIN
responseCode : IN_PROGRESS
message      : Request to deploy Network Domain 'My test domain' has been accepted and is being processed.
info         : {networkDomainId}
warning      :
error        :

requestId    : na/2015-04-25T01:25:21.614-04:00/ae7f0d6b-e522-425f-b744-2628885bdff8


To get the network domains in your MCP 2.0 datacenters

PS C:\> Get-CaasNetworkDomain


name            : My test domain
description     : the test domain
type            : ESSENTIALS
snatIpv4Address : 168.128.2.48
createTime      : 4/25/2015 5:25:21 AM
state           : NORMAL
progress        :
id              : 0481c6ef-d5e5-454f-a45d-096d8b59b104
datacenterId    : NA9

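If you have more than one network domain, standard PowerShell filtering lets you pick out a specific one and reuse its id in the VLAN command below:

PS C:\> $domain = Get-CaasNetworkDomain | Where-Object { $_.name -eq "My test domain" }
PS C:\> $domain.id
0481c6ef-d5e5-454f-a45d-096d8b59b104
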
You can create a VLAN in your MCP 2.0 datacenters.

PS C:\> New-CaasVlan -NetworkDomainId "0481c6ef-d5e5-454f-a45d-096d8b59b104" -Name "Test VLAN" -Description "My first VLAN" -PrivateIpv4BaseAddress "192.168.100.0"


operation    : DEPLOY_VLAN
responseCode : IN_PROGRESS
message      : Request to deploy VLAN 'Test VLAN' has been accepted and is being processed.
info         : {vlanId}
warning      :
error        :
requestId    : na/2015-04-25T01:26:52.721-04:00/b99ddd00-8dfd-4d00-9bf5-99e235efe4af

There is a new command to see the VLANs provisioned in your MCP 2.0 datacenters.

PS C:\> Get-CaasVlan


networkDomain      : DD.CBU.Compute.Api.Contracts.Network20.VlanTypeNetworkDomain
name               : Test VLAN
description        : My first VLAN
privateIpv4Range   : DD.CBU.Compute.Api.Contracts.Network20.IpRangeCidrType
ipv4GatewayAddress : 192.168.100.1
ipv6Range          : DD.CBU.Compute.Api.Contracts.Network20.IpRangeCidrType
ipv6GatewayAddress : 2607:f480:111:1167:0:0:0:1
createTime         : 4/25/2015 5:26:52 AM
state              : PENDING_ADD
progress           : DD.CBU.Compute.Api.Contracts.Network20.ProgressType
id                 : a19c37d6-1a35-4313-810b-f0c59de73000
datacenterId       : NA9

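The last cmdlet in the list above, New-CaasServerOnNetworkDomain, deploys a server directly onto a network domain and VLAN. The sketch below is illustrative only: the parameter names are assumptions modelled on the other MCP 2.0 cmdlets, so run Get-Help New-CaasServerOnNetworkDomain for the exact signature.

# Sketch only - these parameter names are assumed, not confirmed; check Get-Help for the real signature
PS C:\> New-CaasServerOnNetworkDomain -Name "Test server" -Description "My first MCP 2.0 server" -NetworkDomainId "0481c6ef-d5e5-454f-a45d-096d8b59b104" -VlanId "a19c37d6-1a35-4313-810b-f0c59de73000" -AdminPassword "password123"
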
And of course, your existing functionality will still work.

Monday, 20 April 2015

Using Apache LibCloud with the Dimension Data API

Apache has a project called 'libcloud', which is a multi-cloud library for Python. Dimension Data Cloud (formerly known as OpSource Cloud) has been a supported platform for many years.

Here is how you download and use libcloud with our API.

Install libcloud using PIP
>pip install apache-libcloud

Open up the Python shell
>python

import the libcloud project
>>>from pprint import pprint
>>>from libcloud.compute.types import Provider
>>>from libcloud.compute.providers import get_driver

Set the driver as OpSource (Dimension Data Cloud)
>>>cls = get_driver(Provider.OPSOURCE)

Turn off SSL validation for testing, or read https://libcloud.readthedocs.org/en/latest/other/ssl-certificate-validation.html
>>>import libcloud.security
>>>libcloud.security.VERIFY_SSL_CERT = False

Set your credentials
>>>driver = cls('myusername','mypassword')

List the servers in your account.
>>> pprint(driver.list_nodes())

[<Node: uuid=8fa6750b7829b224c6b1f252decfc80a61fae424, name=ExchN1, state=TERMINATED, public_ips=[], private_ips=10.172.132.12, provider=Opsource ...>,
 <Node: uuid=db9505bf21d0811fc99d70c9a9ac72e0695a573d, name=ExchN2, state=TERMINATED, public_ips=[], private_ips=10.172.132.14, provider=Opsource ...>,

Note: LibCloud only supports the NA region at present.

Further details are available on the libcloud website:
https://libcloud.readthedocs.org/en/latest/compute/index.html


Thursday, 29 January 2015

Exporting OVA files from AWS EC2 and Importing images into CaaS via PowerShell

For this post I will demonstrate how to leverage the AWS API to export images that you have added to EC2 as OVA files.

Once you have exported these OVA files, you can upload them to the CaaS Portal via some new commands in PowerShell module version 1.3.1.

Firstly, you will of course need an EC2 account and an instance to test on.

You will also need to install the AWS PowerShell module: http://aws.amazon.com/powershell/

We have a new example script demonstrating this process: Export Images from AWS to CaaS.ps1

# Requirements - install PowerShell for AWS
# http://aws.amazon.com/powershell/

## Replace these with your target image details..

$region = "ap-southeast-2"
$transferBucket = "export-examples-dd" # Name of the S3 bucket used for the export - note you need to create this bucket first
$instanceToMove = "i-fgj7kblz" # ID of the Amazon EC2 instance to migrate.
$targetNetworkName = "EXAMPLE-WEB" # Name of the network you want to migrate to
$targetLocation = "AU1" # CaaS location to migrate to.
$targetImageName = "Example-WEB" # Name of the virtual machine you want to create

# Import the AWS PowerShell Commands.
Import-Module AWSPowerShell


The next step is to dot-source a set of utility commands (shortcuts for the EC2 export calls) from a file called AWS_Utilities.ps1:

# Import the AWS Utilities for CaaS
. .\AWS_Utilities.ps1 

In Amazon, you can create API keys; you will need to replace the values in the script with your own:


$akid = "AKIAIFY64H7FAB2L712D" # Replace with your API Key ID
$aksec = "wLmkFh3Ow177Moy1asdu2kcoyc3jCWSs7wWA" # Key Secret from the UI 

Once you have established the keys, you need to choose your instance ID.


Now create the API connection to AWS

# Setup AWS API
Set-DefaultAWSRegion -Region $region
Set-AWSCredentials -AccessKey $akid -SecretKey $aksec -StoreAs "export"

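If you want to confirm the instance ID before kicking off the export, the sketch below lists your instances. It assumes the stored "export" profile and a reasonably recent AWSPowerShell module, where each reservation exposes its instances via the Instances property:

# List EC2 instances so you can confirm which instance ID to migrate
(Get-EC2Instance -ProfileName "export" -Region $region).Instances |
    Select-Object InstanceId, InstanceType, State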

Then you can export your image to S3


$exportJob = ExportImageFromAWS -region $region -bucketName $transferBucket -instanceId $instanceToMove
DownloadAWSExport -region $region -bucketName $transferBucket -exportId $exportJob


For my 8GB image, this took about 2 hours to complete. You will see the OVA file in your local folder, with the disk and configuration inside.
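
Before uploading, you can confirm the OVA is sitting in your working folder with standard PowerShell, for example:

# Check the exported OVA file and its size
Get-ChildItem -Filter "*.ova" | Select-Object Name, LastWriteTime, @{Name="SizeGB"; Expression={[math]::Round($_.Length / 1GB, 2)}}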

Once you have the OVA, you can upload and import it into CaaS using the New-CaasUploadCustomerImage command.
Normally you would have to use an FTP client, but we have built one into the command, so it uploads the files for you.

New-CaaSConnection -Region Australia_AU -Vendor DimensionData -ApiCredentials (Get-Credential)
# Upload our virtual appliance.
New-CaasUploadCustomerImage -VirtualAppliance "$exportJob.ova"




Then finally, import that image into a new template (Customer Image)


# import OVF into CaaS.
$package = Get-CaasOvfPackages | Where { $_.name -eq "$exportJob.mf" }
$network = Get-CaasNetworks -Name $targetNetworkName -Location $targetLocation


If you log in to the Portal you will see the image, but you can of course do this via PowerShell.


New-CaasImportCustomerImage -CustomerImageName $targetImageName -OvfPackage $package -Network $network -Description "Imported image from AWS - $instanceToMove"


A video demonstration of this process is also available.

Thursday, 15 January 2015

Ruby SDK for CaaS

Getting started with Dimension Data Cloud SDK (Ruby)


In addition to the .NET client library, a Ruby alternative has also been developed. It can be used to automate various tasks, such as setting up and tearing down different environments on the cloud platform.
The SDK is available as a Ruby gem. The code itself can be found on GitHub. In this post, we'll set up an Ubuntu server and connect it to the internet so we can reach it via ssh.
Before we can start, we need to have both Ruby and the SDK on our system. If you still need to install Ruby, you can find instructions here. Installing the SDK itself is easy via gem:
user@localhost:~$ gem install didata_cloud_sdk
Fetching: didata_cloud_sdk-0.3.2.gem (100%)
Successfully installed didata_cloud_sdk-0.3.2
Parsing documentation for didata_cloud_sdk-0.3.2
Installing ri documentation for didata_cloud_sdk-0.3.2
Done installing documentation for didata_cloud_sdk after 0 seconds
1 gem installed

Now that the SDK is installed, we can start building our environment. You'll see that it's quite straightforward; we can even do it right in the console. Let's fire up irb (the interactive Ruby shell):
user@localhost:~$ irb
2.0.0-p247 :001 > require "ddcloud"
 => true
2.0.0-p247 :002 > user = "username"
 => "username"
2.0.0-p247 :003 > pass = "password"
 => "password"
2.0.0-p247 :004 > url = "https://api-eu.dimensiondata.com/oec/0.9"
 => "https://api-eu.dimensiondata.com/oec/0.9"
2.0.0-p247 :005 > org_id = "xxxxxxxxxx"
 => "xxxxxxxxxx"
We first load the SDK in the session and define a couple of variables which hold our login information. I'm logging in via the European portal, so my URL starts with « api-eu ». The different possibilities are:
· North America: api-na
· Australia: api-au
· Africa: api-mea
· Asia Pacific: api-ap
The org_id is a unique string which specifies your organization. You’ll find it at the top of your account page in the cloud web portal.
With this information, we can create a client to communicate with the cloud API:
2.0.0-p247 :006 > c = DDcloud::Client.new url, org_id, user, pass
 => #<DDcloud::Client:0x00000002afe068 @api_base="https://api-eu.dimensiondata.com/oec/0.9", @org_id="xxxxxxxxxxx", @username="username", @password="password", @datacenter="EU1", @default_password="verysecurepassword", @colors=true, @silent=false>
The client itself is not yet connected to the API, but will use the credentials you provided to execute requests on your behalf.
To create a new machine and connect it to the internet, we need to complete 3 steps:
1)      Create a network
2)      Create and start the cloud server
3)      Connect it to the internet
We’ll go right ahead and create a network:
2.0.0-p247 :007 > c.network.create "EU-test-network", "This is a test network"
=> #<Hashie::Mash operation="Add Network" result="SUCCESS" result_code="REASON_0" result_detail="Network created successfully (Network ID: 58dce2ee-d18f-11e1-b508-180373fb6591)">
2.0.0-p247 :008 > c.network.show_by_name "EU-test-network"
=> #<Hashie::Mash description="This is a test network" id="58dce2ee-d18f-11e1-b508-180373fb6591" location="EU1" multicast="false" name="EU-test-network" private_net="10.240.215.0">
When creating a network, we need to specify a name and optionally a description. The network is created before the function returns, so the request can take some time. You can check the status of the network by calling the « show_by_name » method.
We’ll now create the cloud server and place it in the network. We again need to provide a name and a description, as well as a network id and an image id. The network id can be extracted from the « show_by_name » call. The image id specifies which operating system is preinstalled on your new cloud server. Dimension Data provides a rather large list of operating systems which you can find in the cloud portal. In this case, we’re interested in an Ubuntu server. The id for this image is: d4ed6d40-e2f0-11e2-84e5-180373fb68df
Putting this all together:
2.0.0-p247 :009 > network = c.network.show_by_name "EU-test-network"
=> #<Hashie::Mash description="This is a test network" id="58dce2ee-d18f-11e1-b508-180373fb6591" location="EU1" multicast="false" name="EU-test-network" private_net="10.240.215.0">
2.0.0-p247 :010 > network_id = network[:id]
 => "58dce2ee-d18f-11e1-b508-180373fb6591"
2.0.0-p247 :011 > image_id = "d4ed6d40-e2f0-11e2-84e5-180373fb68df"
 => "d4ed6d40-e2f0-11e2-84e5-180373fb68df"
2.0.0-p247 :012 > c.server.create "testserver", "this is a testserver", network_id, image_id
=> #<Hashie::Mash operation="Deploy Server" result="SUCCESS" result_code="REASON_0" result_detail="Server \"Deploy\" issued">

The server creation is not instantaneous; we can follow its progress as follows:
2.0.0-p247 :021 > require "pp"
2.0.0-p247 :022 > pp c.server.show_by_name("testserver")
{"id"=>"41c17cab-e331-4ac4-9732-91ac65b5c8c9",
 "location"=>"EU1",
 "name"=>"testserver",
 "description"=>"this is a testserver",
 "operating_system"=>
  {"id"=>"UBUNTU1264", "display_name"=>"UBUNTU12/64", "type"=>"UNIX"},
 "cpu_count"=>"2",
 "memory_mb"=>"4096",
 "disk"=>
  {"id"=>"db05fa4e-8876-4df6-a55b-7cad16557083",
   "scsi_id"=>"0",
   "size_gb"=>"10",
   "speed"=>"STANDARD",
   "state"=>"NORMAL"},
 "source_image_id"=>"d4ed6d40-e2f0-11e2-84e5-180373fb68df",
 "network_id"=>"58dce2ee-d18f-11e1-b508-180373fb6591",
 "machine_name"=>"10-240-215-11",
 "private_ip"=>"10.240.215.11",
 "created"=>"2014-12-04T13:58:21.000Z",
 "is_deployed"=>"false",
 "is_started"=>"true",
 "state"=>"PENDING_ADD",
 "status"=>
  {"action"=>"DEPLOY_SERVER",
   "request_time"=>"2014-12-04T13:58:21.000Z",
   "user_name"=>"username",
   "number_of_steps"=>"13",
   "update_time"=>"2014-12-04T14:03:00.000Z",
   "step"=>{"name"=>"WAIT_FOR_GUEST_OS_CUSTOMIZATION", "number"=>"10"}}}
The server is correctly deployed and started when both the « is_deployed » and « is_started » variables are true.
Finally, we need to connect the network to the internet and open port 22 to allow ssh access. For this, we’ll have to create an aclrule (to open port 22) and create a natrule (to map our internal address to an external ip). Let’s go:
2.0.0-p247 :025 >   c.network.natrule_create network_id, "map_testserver", "10.240.215.11"
=> #<Hashie::Mash id="038d6d6e-d4d0-4245-a1fc-7f60c39acf57" name="map_testserver" nat_ip="164.177.187.50" source_ip="10.240.215.11">
We have to provide the network id, give the rule a name and provide the internal ip of our server. In this case, we extracted the ip from the server « show_by_name » method. Notice that the API returns the public ip in the « nat_ip » variable. Let's note this ip; we'll need it to access our server later.
Creating the aclrule is a bit more involved:
2.0.0-p247 :027 > c.network.aclrule_create network_id, "port-22-allow", 105, true, "tcp", "22", true
=> #<Hashie::Mash action="PERMIT" destination_ip_range=#<Hashie::Mash> id="bb40e393-02ef-4a1f-bff4-98de7cba6f67" name="port-22-allow" port_range=#<Hashie::Mash port1="22" type="EQUAL_TO"> position="105" protocol="TCP" source_ip_range=#<Hashie::Mash> status="NORMAL" type="OUTSIDE_ACL">
We again provide the network id and a name. We also have to provide a position (105, which in this case will be the last rule before « drop any any »). The next argument specifies that the rule is inbound. We also specify the protocol (tcp) and the port (22). Finally, we specify that the rule is an « Allow » rule (true).
Our cloud server is now available on the public ip we noted before. Let's try to access it via ssh. By default, the password is « verysecurepassword », which is actually not that secure. It is recommended to change this as soon as possible.
user@localhost:~$ ssh root@164.177.187.50
The authenticity of host '164.177.187.50 (164.177.187.50)' can't be established.
ECDSA key fingerprint is db:e6:c9:32:1d:99:6c:f0:23:ed:a3:cb:d7:11:ee:ea.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '164.177.187.50' (ECDSA) to the list of known hosts.
root@164.177.187.50's password:
Welcome to Ubuntu 12.04.2 LTS (GNU/Linux 3.2.0-23-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Thu Dec  4 15:23:22 CET 2014

  System load:  0.0               Processes:           77
  Usage of /:   17.2% of 8.02GB   Users logged in:     0
  Memory usage: 1%                IP address for eth0: 10.240.215.11
  Swap usage:   0%

  Graph this data and manage this system at https://landscape.canonical.com/

root@10-240-215-11:~#

There you have it, you’ve installed your first cloud server via the API!

Wednesday, 14 January 2015

Parallel and background server operations with PowerShell jobs


One of the requests that I get internally is the ability to create CaaS servers in the background.
Imagine the scenario:

We need to create 4 CaaS servers, set 4 vCPUs and 8GB RAM on each, add an additional disk, create a VIP (virtual IP) and load balance requests to these servers on port 443.

One approach would be to invoke each server cmdlet (New-CaasServer, Set-CaasServer, Set-CaasServerDiskSpeed) and use Get-CaasDeployedServer to check the status and state of each server before calling the next command.

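That polling approach looks roughly like the sketch below for a single server. It assumes the object returned by Get-CaasDeployedServer exposes an isDeployed flag, so check the actual property names on the object in your environment:

# Sketch of polling a single server until it reports as deployed (property name assumed)
do {
    Start-Sleep -Seconds 30
    $server = Get-CaasDeployedServer -Name "VMTest1"
} until ($server -and $server.isDeployed)
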
Another is to use -PassThru and Out-CaasWaitForOperation on the server cmdlets so you can execute a sequence of commands against the same server, as described in the batch operation blog post. But you would still need to wait for each server to be created one by one.

PowerShell has provided native support for background jobs via Start-Job since version 2.0. These jobs run in a separate runspace (effectively a separate thread), so all the variables and objects required to run the operation have to be made available inside that runspace.
Combining the batch operations with PowerShell jobs therefore provides a very efficient automation approach, saving time when creating CaaS servers.
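
As a minimal illustration of how values from the calling session cross into a job's runspace via -ArgumentList (the same technique the script below relies on):

# Values are handed to the job through -ArgumentList and read back via a param() block (or $args)
$greeting = "hello from the parent session"
$job = Start-Job -ScriptBlock { param($message) "The job received: $message" } -ArgumentList $greeting
Wait-Job $job | Receive-Job
Remove-Job $job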

The script below will create 4 CaaS servers at the same time.

#import Caas Module
Import-Module CaaS

#capture the CaaS credentials (the connection itself is created inside each background job)
$cred = Get-Credential
$networkname = "Network1"


#create a simple function to create a Caas Server passing some parameters from the host script
Function CreateVMBackground
{
param(
[pscredential]$login,
[string]$servername,
[string]$networkname

)

#create a script block that will be executed on the background
$scriptblock = {

    $osimagename = "Win2012 DC 64-bit 2 CPU"
    $administratorPassword= "password123"
    $serverRAM = 8192
    $serverCPUCount = 4

    #create connection for the background job
    New-CaasConnection -Name "AU" -ApiCredentials $args[0] -Vendor DimensionData -Region Australia_AU


    #Get the network with a specific name
    $network=Get-CaasNetworks -Name $args[2]

    #Get a OS image with a specific name
    $os=Get-CaasOsImages -Network $network -Name $osimagename

    #create a new server details configuration
    $serverdetails = New-CaasServerDetails -Name $args[1] -AdminPassword $administratorPassword -IsStarted $false -OsServerImage $os -Network $network

    # create new server, change memory to 8GB and 4 vCPU, start the server
     New-CaasServer -ServerDetails $serverdetails -PassThru | Out-CaasWaitForOperation | Set-CaasServer -CPUCount $serverCPUCount -MemoryInMB $serverRAM -PassThru | Out-CaasWaitForOperation | Set-CaasServerState -Action PowerOn

 }


 $jobname = "CreateServer-{0}" -f $servername
 #start a job executing the scriptblock and passing the variables from the ArgumentList to the $args parameter
 Start-Job -Name $jobname -ScriptBlock $scriptblock -ArgumentList $login,$servername,$networkname

}


#create VMTest1
CreateVMBackground -login $cred -servername "VMTest1" -networkname $networkname

Start-Sleep 3
#create VMTest2
CreateVMBackground -login $cred -servername "VMTest2" -networkname $networkname

Start-Sleep 3
#create VMTest3
CreateVMBackground -login $cred -servername "VMTest3" -networkname $networkname

Start-Sleep 3
#create VMTest4
CreateVMBackground -login $cred -servername "VMTest4" -networkname $networkname

#Get any message generated by the jobs
#Get-Job | Where-Object {$_.Name -like 'CreateServer-*'} | Receive-Job

Write-Host -ForegroundColor Yellow "Waiting for the jobs to finish..."
#wait for the jobs to complete before continuing the script
Get-Job | Where-Object {$_.Name -like 'CreateServer-*'} | Wait-Job

Write-Host -ForegroundColor Yellow "All Jobs completed"


#remove completed jobs
Get-Job | Where-Object {$_.Name -like 'CreateServer-*'} | Remove-Job




The remainder of the script below creates and configures the network components for the VIP and load balances the requests across the servers.



#create connection
New-CaasConnection -Name "AU" -ApiCredentials $cred -Vendor DimensionData -Region Australia_AU

#get the network
$network=Get-CaasNetworks -Name $networkname

#get each server into a variable
$server1 = Get-CaasDeployedServer -Name "VMTest1"
$server2 = Get-CaasDeployedServer -Name "VMTest2"
$server3 = Get-CaasDeployedServer -Name "VMTest3"
$server4 = Get-CaasDeployedServer -Name "VMTest4"

#create antiaffinity rules

New-CaasServerAntiAffinityRule -Server1 $server1 -Server2 $server2
New-CaasServerAntiAffinityRule -Server1 $server3 -Server2 $server4

#create 4 real servers
$rserver1=New-CaasRealServer -Network $network -Server $server1 -Name $server1.name -InService $true -PassThru
$rserver2=New-CaasRealServer -Network $network -Server $server2 -Name $server2.name -InService $true -PassThru
$rserver3=New-CaasRealServer -Network $network -Server $server3 -Name $server3.name -InService $true -PassThru
$rserver4=New-CaasRealServer -Network $network -Server $server4 -Name $server4.name -InService $true -PassThru

#create a new probe (optional)
#$probe = New-CaasProbe -Network $network -Name "test" -Type TCP -Port 555
#use an existing probe
$probe = Get-CaasProbe -Network $network -Name "tcp"

$serverfarm = New-CaasServerFarm -Network $network -Name "VMTestGroup" -Predictor LEAST_CONNECTIONS -RealServer $rserver1 -Probe $probe -PassThru
#add the remaining servers to the server farm
Add-CaasToServerFarm -Network $network -ServerFarm $serverfarm -RealServer $rserver2
Add-CaasToServerFarm -Network $network -ServerFarm $serverfarm -RealServer $rserver3
Add-CaasToServerFarm -Network $network -ServerFarm $serverfarm -RealServer $rserver4

#add a probe to the server farm
#Add-CaasToServerFarm -Network $network -ServerFarm $serverfarm -Probe $probe

#create new persistence profile
$persprofile=New-CaasPersistenceProfile -Network $network -Name "PersProfileVMTest" -ServerFarm $serverfarm -TimeoutMinutes 30 -CookieName "VMCookie" -CookieType COOKIE_INSERT -PassThru

#create vip
New-CaasVip -Network $network -Name "VMTestVip" -PersistenceProfile $persprofile -Port 443 -Protocol TCP -InService $true -ReplyToIcmp $true


#get Vip created
$vip = Get-CaasVip -Network $network -Name "VMTestVip"

#create an IPAddress object from the VIP's IP address
$vipIpAddress =[IPAddress]$vip.ipAddress

#create ACL rule to permit any connection to the VIP ip address on port 443
New-CaasAclRule -Network $network -AclRuleName "VMTestVip443" -Position 321 -Action PERMIT -Protocol TCP -DestinationIpAddress $vipIpAddress -PortRangeType EQUAL_TO -Port1 443
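
Once the ACL rule is in place, you can do a quick reachability check against the new VIP from any machine that has network access to it (Test-NetConnection requires Windows 8.1 / Server 2012 R2 or later):

# Optional: confirm that port 443 on the VIP is reachable
Test-NetConnection -ComputerName $vip.ipAddress -Port 443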


If you have any feedback please let us know!



Tuesday, 13 January 2015

PowerShell for CaaS 1.3 - Multiple connections and Account management

After the holiday break, we are back with more PowerShell for Dimension Data CaaS (Cloud). More functionality has been added to enhance automation across the board.

We now have more than 90 PowerShell cmdlets interacting with the Dimension Data REST API. Please check the previous post for more examples.

Reminder: You can download the latest release and source code on GitHub.

1) Support for multiple connections

You can now connect to multiple accounts in the same script, which is very useful if you use multiple datacentres. To support this, we added the Name parameter when you create a connection; you just change the active connection to switch between them.


$connectionDev=New-CaasConnection -Name "AustraliaConnection" -ApiCredentials (Get-Credential -Message "Dev Connection") -Vendor DimensionData -Region Australia_AU

$connectionDev2=New-CaasConnection -Name "APACConnection" -ApiCredentials (Get-Credential -Message "Dev2 Connection") -Vendor DimensionData -Region AsiaPacific_AP

#list deployed servers on the active connection (AustraliaConnection)
Write-Host "Get Deployed Servers from connection AustraliaConnection"
Get-CaasDeployedServer

#set the active connection to APACConnection
Set-CaasActiveConnection -Name "APACConnection"


#list deployed servers on the active connection (APACConnection)
Write-Host "Get Deployed Servers from connection APACConnection"
Get-CaasDeployedServer


#use the connection parameter to force the cmdlet to use the AustraliaConnection
Get-CaasDeployedServer -Connection $connectionDev


#list all connections stored on the PS session
Get-CaasConnection

#Remove the APACConnection from PS Session
Remove-CaasConnection -Name "APACConnection"


2) Manage Accounts
The primary administrator can create, update and delete accounts in his/her organisation, using the cmdlets shown in the example below.


#create a helper role object to set which roles the new user will be granted
$roles = New-CaasAccountRoles -Network $true -Backup $true -Server $true -Storage $true -CreateImage $true -Reports $true

#create a new sub-administrator account
New-CaasAccount -Username "testaccount_demo" -FullName "Test Account" -FirstName "Test" -LastName "Account" -Password (ConvertTo-SecureString -AsPlainText -String "password123" -Force) -EmailAddress "test_account@dimensiondata.com" -Roles $roles

#change the new account FullName and FirstName
Set-CaasAccount -Username "testaccount_demo" -FullName "TestAccount2" -FirstName "Test2"

#display all accounts
Get-CaasAccounts

#delete the account
Get-CaasAccounts -Username "testaccount_demo" | Remove-CaasAccount

Please let us know if you have any issues or questions using the PowerShell module for CaaS.