The vSphere Integrated Containers appliance runs various services, such as vSphere Integrated Containers Management Portal, vSphere Integrated Containers Registry, the API for the vSphere Client plug-in, and the Web server for the appliance welcome page and vSphere Integrated Containers Engine download. The appliance has four virtual disks attached to it:
| Disk No. | Path | Node | Description |
|---|---|---|---|
| 1 | / | SCSI(0:0) | The root disk, which contains the operating system and application state of the vSphere Integrated Containers appliance. |
| 2 | /storage/data/ | SCSI(0:1) | A data disk that contains, among other things, the vSphere Integrated Containers Registry instance that runs in the appliance. |
| 3 | /storage/db/ | SCSI(0:2) | A database disk that contains the MySQL, Clair, and Notary databases for vSphere Integrated Containers Registry. |
| 4 | /storage/log/ | SCSI(0:3) | A logging disk that contains the logs for the different vSphere Integrated Containers components. |
The separation of different types of data between disks allows you to upgrade the appliance with an existing data disk from a previous installation. It also allows you to back up and restore the different disks separately, if necessary.
The recommended way to back up the appliance is to copy the base disks. You can then restore the appliance by attaching the cloned disks to a new instance of the appliance.
Copy the Base Disks
You can copy the base disks manually by copying the VMDK files in the vSphere Client.
Procedure
- Right-click the appliance VM and select Power > Shut Down Guest OS to shut down the appliance VM. IMPORTANT: Do not select Power Off. You must shut down the VM in order to quiesce the database before the backup. Also, if you use NFS datastores, you cannot copy disk files while the VM is powered on.
- Go to the Storage view of the vSphere Client and navigate to the datastore and datastore folder that contain the VM files for the version of the appliance that you want to back up.
- Use ctrl-click to select the following VMDK disk files from the old version of the appliance.
| File to Select | Description |
|---|---|
| <appliance_name>.vmdk | Hard disk 1, root disk |
| <appliance_name>_1.vmdk | Hard disk 2, data disk |
| <appliance_name>_2.vmdk | Hard disk 3, database disk |
| <appliance_name>_3.vmdk | Hard disk 4, log disk. Migrating logs is optional. |

- Click Copy to, select a target datastore folder in which to copy the backup disk files, and click OK.
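If you prefer the command line, the same four-disk copy can be sketched with vmkfstools on the ESXi host. This is a dry-run sketch, not the documented procedure: the datastore paths and the appliance name vic-appliance are placeholder assumptions.

```shell
# Dry-run sketch: clone the four appliance VMDKs with vmkfstools.
# SRC, DST, and the appliance name "vic-appliance" are placeholder
# assumptions; adjust them to your environment.
RUN="echo"   # dry run: print each command instead of executing; set RUN="" to run
SRC="/vmfs/volumes/datastore1/vic-appliance"
DST="/vmfs/volumes/backup-ds/vic-backup"
count=0
for suffix in "" _1 _2 _3; do
  # vmkfstools -i clones a virtual disk from source to destination
  $RUN vmkfstools -i "$SRC/vic-appliance$suffix.vmdk" \
                     "$DST/vic-appliance$suffix.vmdk"
  count=$((count + 1))
done
```

Leave the echo in place until you have verified the paths; vmkfstools -i performs a full clone of each source disk.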
Alternatively, you can use vmkfstools to clone the disks and manually copy the VM configurations. For information about using vmkfstools, see Using vmkfstools in the vSphere documentation.

Restoring Cloned Disks
To restore the appliance from cloned disks, deploy a new instance of the vSphere Integrated Containers appliance of the same version as the one you backed up. You then copy the cloned VMDK files into the new appliance datastore and attach them to the appropriate virtual device nodes on the new appliance VM.
IMPORTANT: After you deploy the new instance of the appliance, do not power it on. If you do power it on, power it off again without completing the Complete VIC appliance installation panel, which registers the appliance with vCenter Server.
Procedure
- Right-click the new appliance VM and select Edit Settings.
- Remove the hard disks from the new appliance.
| Disk to Remove | Description |
|---|---|
| Hard disk 1 | Root disk |
| Hard disk 2 | Data disk |
| Hard disk 3 | Database disk |
| Hard disk 4 | Log disk |

- Hover your pointer over each hard disk and click the Remove button on the right-hand side of the row.
- For each disk, select the Delete files from this datastore checkbox.
- When you have marked the disks for removal, click OK.
- Go to the Storage view of the vSphere Client and navigate to the datastore and datastore folder that contain the backup disk files that you copied from the old appliance.
- Select the appropriate VMDK files and click Copy to to copy the backup VMDK files to the datastore folder of the new appliance.
- Attach the backup VMDK files to the appropriate nodes on the new appliance.
| VMDK File | Virtual Device Node |
|---|---|
| <appliance_name>.vmdk | SCSI(0:0) |
| <appliance_name>_1.vmdk | SCSI(0:1) |
| <appliance_name>_2.vmdk | SCSI(0:2) |
| <appliance_name>_3.vmdk | SCSI(0:3) |

- In the Hosts and Clusters view, right-click the appliance and select Edit Settings.
- Select the option to add a new disk:
- HTML5 vSphere Client: Click the Add New Device button and select Existing Hard Disk.
- Flex-based vSphere Web Client: Click the New device drop-down menu, select Existing Hard Disk, and click Add.
- Navigate to the datastore folder for the appliance, select the backup version of the <appliance_name>.vmdk disk file, and click OK.
- Expand New Hard Disk and make sure that the Virtual Device Node for the disk is set to SCSI(0:0).
- Repeat the procedure to attach <appliance_name>_1.vmdk to SCSI(0:1), <appliance_name>_2.vmdk to SCSI(0:2), and <appliance_name>_3.vmdk to SCSI(0:3).
- Click OK.
- Power on the new appliance VM.
Take Snapshots of the Appliance VM
The appliance disks are not independent of the appliance VM, so if you take a snapshot of the appliance VM, it also takes snapshots of all of the disks.
You must shut down the appliance VM before you take the snapshot. Taking snapshots while the appliance is running can result in the appliance coming back up in an inconsistent state if you restore it from a snapshot.
IMPORTANT: Do not use snapshots as your main backup method. Use snapshots only for short-term, temporary backups. For more information, see the best practices for using snapshots in VMware KB 1025279.
Procedure
- Right-click the appliance VM and select Power > Shut Down Guest OS to shut down the appliance VM. IMPORTANT: Do not select Power Off.
- Take a snapshot of the appliance VM.
- Power on the appliance VM.
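The three steps above can also be driven from the command line with govc, the vSphere CLI from the govmomi project. This is a dry-run sketch that only prints each command; the VM name and snapshot name are placeholder assumptions, and this wrapper script is ours, not part of the product.

```shell
# Dry-run sketch of the snapshot procedure using govc.
# "vic-appliance" and "backup-snap" are placeholder assumptions.
# The run() wrapper only prints each command; replace its body with
# "$@" to actually execute against vCenter (GOVC_URL etc. must be set).
run() { echo "+ $*"; last="$*"; }

VM="vic-appliance"
run govc vm.power -s "$VM"                      # graceful guest shutdown, not power off
run govc snapshot.create -vm "$VM" backup-snap  # snapshot while the VM is shut down
run govc vm.power -on "$VM"                     # power the appliance back on
```

The graceful shutdown (-s) matters here for the same reason as in the procedure above: snapshotting a running appliance can leave it in an inconsistent state.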
HCIBench stands for 'Hyper-converged Infrastructure Benchmark'. It is essentially an automation wrapper around the popular and proven open-source benchmark tools Vdbench and Fio that makes it easier to automate testing across an HCI cluster. HCIBench aims to simplify and accelerate customer POC performance testing in a consistent and controlled way. The tool fully automates the end-to-end process of deploying test VMs, coordinating workload runs, aggregating test results, analyzing performance, and collecting the data necessary for troubleshooting.
HCIBench is not only a benchmark tool designed for vSAN; it can also be used to evaluate the performance of any kind of hyper-converged infrastructure storage in a vSphere environment.
- Web browser: IE8+, Firefox, or Chrome
- vSphere 5.5 and later environments, for deployment of both HCIBench and its client VMs
Version 2.5.0 Update
- Added support for vSAN HCI Mesh testing; you can now test local and remote vSAN datastores at the same time
- Added support for testing local storage, including VMFS and vSAN Direct
- Added vSAN Debug Mode, which lets users collect the vm-support bundle and vmkstats automatically when running tests against vSAN
- Changed the guest VM naming convention to {vm_prefix}-{datastore_id}-batch_num-sequence_num
- Enhanced testing report format
- Allow user to specify customized IP addresses for guest VMs
- Allow user to configure CPU and Memory for guest VMs
- Added best practice and network troubleshooting guide in the user manual
- Bug fixes
- MD5 Checksum: 817c2c788364f252e728d4253b3b96da HCIBench_2.5.0.ova
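Before deploying, it is worth verifying the downloaded OVA against the published checksum above. A minimal sketch, assuming a Linux host with md5sum; the helper name check_md5 is ours, not part of HCIBench.

```shell
# check_md5 FILE EXPECTED: prints OK if FILE's MD5 digest matches
# EXPECTED, MISMATCH otherwise. Helper name is our own convention.
check_md5() {
  actual=$(md5sum "$1" | awk '{print $1}')
  if [ "$actual" = "$2" ]; then echo OK; else echo MISMATCH; fi
}

# Usage with the published 2.5.0 value (assumes the OVA is in the
# current directory):
#   check_md5 HCIBench_2.5.0.ova 817c2c788364f252e728d4253b3b96da
```

A MISMATCH usually means a truncated or corrupted download; re-download the OVA before deploying it.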
Version 2.4.0 Update
- Fixed tvm deployment bug when specifying host
- Enabled easy run to support stretched clusters
- Fixed timezone issue in the PDF report, and added more vSAN info to the PDF report
- Set testname and testcase as variables in Grafana
- Added CPU workload to the Fio config page
- Updated rbvmomi to support vSphere 7.0+
- Enhanced Fio and Vdbench Graphite dashboards
MD5 Checksum: 0cfd6cc852e33e5ce32022a66539b4c9 HCIBench_2.4.0.ova
Version 2.3.1 Update
- Fixed static IP setting issue
- Fixed reuse VMs on multi datastores issue
- Fixed vm/tvm deployment issue
- MD5 Checksum: 1b220f22575eacf62a965992a4c916e7 HCIBench_2.3.1.ova
Version 2.3.0 Update
- Upgraded to Photon 3
- Integrated vSAN performance monitoring
- Tuned disk preparation
- Added HCIBench test report
- Added DNS exception handler
- Upgraded fio to 3.16
- Bug fixes
- MD5 checksum: b43c29e146b8a7efa08028e7d6699a6e
- If you need to automate HCIBench, please look at:
https://code.vmware.com/samples?id=6502 for python2.7
https://code.vmware.com/samples?id=6588 for python3
Version 2.2.1 Update
- Fixed docker volume moving issue
- MD5 checksum of HCIBench_2.2.1.ova: 1a39c9df7d1485bc06332ae0b9d92ca7
Version 2.2 Update
- Moved docker volume to sdb to avoid blowing up OS disk
- Added Fio spreadsheet generator
- Added DRS warning checkup
- Enhanced Grafana to keep all the historical data
- Added DNS exception handler
- Fixed RAM and PCPU reporting issue
- Fixed Vdbench spreadsheet not reporting issue
- MD5 checksum of HCIBench_2.2.ova: bb2a77dcf2ecc23b1ec2c30aee9945ec
Version 2.1 Update
- Switched UI to dark theme
- Redesigned VMDK preparation methodology, which can complete much faster using RANDOM on deduped storage
- Added VMDK preparation process update
- Added Graphite port check into prevalidation
- Added vCenter/Host password obfuscation
- Added 'Delete Guest VM' button
- Fixed Grafana display issue
- Fixed FIO blank results issue
- Bug fixes
- MD5 checksum of HCIBench_2.1.ova: d37e6f164ed962a6e7ccbe104ba9eaec
Version 2.0 Update
Disk Sensei 1 5 11
- Added fio as an alternative workload generator
- Added Grafana for workload live monitoring
- Switched UI to clarity
- Allow user to select one to four cases while using easy-run
- Bug fixes
- MD5 checksum of HCIBench_2.0.ova: ba3c2b06b8c27fb41a1bb68baedb325f
Version 1.6.8.7 Update
- Enhanced easy-run: the original 4k, 70% read workload is now the first test case, followed by 4k, 100% read and 256k, 100% write
- Enhanced tvm deployment validation
- Added Checksum into easy-run consideration
- Updated guest VM template with increased ring_pages and disk scheduler
- Added DNS configuration guidance into welcome message
Version 1.6.8.5 Update
- Added 2 more test cases into easy-run, 4k 100% random read and 256k 100% sequential write
- Batch deployment is used when deploying more than 8 VMs, to speed up the deployment process
- Allow user to choose IP prefix when using static IP
- Optimized UI to allow user to review the results by single click
- Fixed regression issue when placing Datacenter/Cluster in the folder
Version 1.6.8.1 Update
- Fixed regression when datastore is in the datastore folder
- Avoid checking connection to host directly and use tvm deployment instead
- Added Vdbench version check in summary script
Version 1.6.8 Update
- Added resource pool and VM folder fields for VMC environment
- Fixed easy-run disk size issue
- Enhanced pre-validation error message handling
- Changed the names of network interface from 'Public Network' to 'Management Network', and 'Private Network' to 'VM Network'
Version 1.6.7.2 Update
- Enhanced write/read buffer/cache methodology
- Fixed network ip-prefix selection issue
- Fixed 95% percentile calculation issue
Version 1.6.7.1 Update
- Fixed vSAN Performance Diagnostic API call
- Fixed network validation message not clear issue
- Fixed setting re-use VMs as default bug in 1.6.7
Version 1.6.7 Update
- Enabled https instead of http
- Added storage policy field; users can specify a storage policy for the data disks. In this version, a storage policy cannot be assigned to existing client VMs
- Enhanced deployment methodology
- Enhanced vSAN Observer to avoid blowing up memory
- Enhanced vSAN Performance Diagnostic API call with HCIBench workload configuration included
- Added timestamp to the testing status
- Bug fixes
Version 1.6.6 Update
- Spectre & Meltdown patches on both the HCIBench VM and client VMs
- Added client VM prefix field, allow running multiple HCIBench instances against single cluster
- Attach testing log along with testing results
- Enabled live vSAN Observer when running testing, using https://HCIBench_IP:8010
- Updated the drop read/write cache script
- Added more message info during the testing
- Bug fixes
Version 1.6.5.2 Update
- Added case comparisons by generating an XLS file for each test folder
- Fixed bug when there's white space in datastore name or test name
Version 1.6.5.1 Update
- Enhanced IP segment selection
- Set open file limit to 4096
- Updated vm-tools to the latest version
- Bug fixes
Version 1.6.5 Update
- Enhanced 95th percentile calculation.
- Added Curve and Multi Run calculation.
- Added SSH Service validation.
- Replaced DHCP Service with Static IP Service.
- Added IP conflict check.
- Fixed a bunch of bugs.
- Changed the default client VM RAM from 4GB to 8GB.
Version 1.6.3 Update
- Enhanced vSANPerformanceDiagnose function call
- Enhanced port 443 validation
- Enhanced results calculation
- Added host maintenance mode validation
- Added deployment validation
Version 1.6.2 Update
- Integrated with vSAN Performance Diagnostic of vSphere_6.5U1/vSAN_6.6.1.
- Added DHCP Service validation.
- Added Vdbench workload profile validation.
- Removed the root password expiration policy.
- Changed results display to show full file names.
- Changed easy-run calculation from host basis to disk-group basis.
Version 1.6.1 Update
- Added network name uniqueness check
- Changed the 'disk warmup' to 'Virtual disk preparation' to avoid confusion
- Changed the pvscsi configuration: when there are more than 4 VMDKs per pvscsi controller, more controllers are added and the VMDKs are evenly distributed across them
- Bug fixes
Version 1.6.0.0 Update
- Added Clear read/write cache option for vSAN.
- Added Easy Run feature for vSAN. Easy Run helps vSAN users determine the number of VMs and VMDKs, the VMDK size, and the disk initialization method, and then runs the tests automatically.
- Added Re-use VMs feature; users can re-use the existing client VMs for further tests.
- Added the 95th-percentile calculation to the results.
- Resolved a special-characters issue and fixed other bugs.
Version 1.5.0.5 Update
- Increased Timeout value of client VM disk from 30 seconds to 180 seconds.
- Disabled client VM password expiration.
- Disabled client VM OS disk fsck.
- Set Observer interval to 60 seconds to shrink the size of observer data.
- Fixed PCPU calculation.
- Created link directory of /opt/automation/logs, user will be able to review the testing logs in http://HCIBENCH/hcibench_logs/
- Increased the RAM of HCIBench from 4GB to 8GB to avoid running out-of-resource issue.
Version 1.5.0.4 Update
- Added a check for whether the user saved the configuration
- Increased the stack size to 65536 due to the 'ls too many arguments' issue while processing PCPU usage
- Fixed the typo from 'Netowrk' -> 'Network' in the deployment page.
- Added vCenter hostname resolve checking.
- Fixed the client VM OS VMDK size from 15.5GB to 16GB; changed the vmdk size specification from decimal format to binary format.
Version 1.5.0.3 Update
- Fixed the bug where, when testing against a non-vSAN datastore with 'Directly Deploy on Hosts' checked, the test would not proceed after deployment.
- Enhancement: storage policy information is extracted during pre-validation to check whether the deployment size would be too aggressive for vSAN.
Version 1.5.0.2 Update
- Bug 'test would fail if datastores' name contains white space' fixed.