If your organization is like most, you’ve probably been asked to do a whole lot more with a whole lot less. Budgets get slashed, corners get cut, and the result sometimes comes at a cost that has little to do with dollars.
Take the case of a recent client, for example. This customer had 23 production databases deployed on an Exadata X2-2 quarter-rack that was initially configured in 2012. But because of multiple factors, including budget cutbacks and skill and resource gaps, the Exadata server had not been patched for the last two years.
As a result, there were a lot of unknowns: the disk state of the cell2 storage server, as well as the software states of the Storage Servers, InfiniBand switches, Database Server OS, cluster, and databases, were all unknown. Worse, there was an increased risk that something would fail, bringing business to a halt and costing real dollars.
Not only were we challenged to upgrade the X2-2 environment to the latest patchset for the Oracle cluster and Oracle database, we had to do so with minimal downtime. We also had to perform the patching at a local data center with access restricted to U.S. citizens, due to security concerns.
We successfully patched the Exadata server to the latest patchset using Oracle and industry best practices. The patching process included:
- Holistic discovery of the Exadata environment – Ran the Oracle Exadata health script and reviewed the Oracle Grid Infrastructure environment, the YUM repository, and the current patchset. We then identified the latest patchsets and planned the patch procedure.
- Patch pre-verification – Verified the Exadata server environment, the cell disks and Grid servers, the current patchsets for the Grid and database environments, and the YUM repository. We then downloaded all software patch distributions and staged them on the server.
- Patch install – Installed the non-rolling Storage Server patch, the InfiniBand patch, and the Database Server OS patch; performed a software-only, out-of-place upgrade of the Grid server and Database Server software; and applied the latest patches to the Grid Infrastructure and database software to bring the system up to date.
- Post-install verification – Tested the database server to ensure predictable installation and stability by running the Oracle health scripts one last time.
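The "Oracle Exadata health script" used in discovery and post-install verification is typically Oracle's exachk utility. A minimal sketch of how it might be invoked follows; the staging path is an assumption, and the `echo` only prints the command so the sketch runs outside an Exadata environment:

```shell
# Sketch: invoke exachk from its staging directory (path is illustrative).
# On the real system, drop the echo and run the command directly.
EXACHK_HOME=${EXACHK_HOME:-/opt/oracle.SupportTools/exachk}
run_exachk="cd $EXACHK_HOME && ./exachk -a"   # -a runs the full set of checks
echo "$run_exachk"
```

Running it before and after patching gives a like-for-like comparison of the environment's health.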
Patch Install Steps
- Shut down all databases normally:

      srvctl stop database -d <dbname>
- Shut down the Oracle cluster. Log in as the root user:

      dcli -g dbs_group -l root "/u01/app/11.2.0.4/grid/bin/crsctl stop crs -f"
      dcli -g dbs_group -l root "/u01/app/11.2.0.4/grid/bin/crsctl disable crs"
      dcli -g dbs_group -l root "ps -ef | grep grid"

  Kill any hanging Oracle processes with kill -9.
- Verify ssh connectivity as the root user from the DB server to the storage servers; each command should work without a password prompt:

      ssh vaopcel01 date
      ssh vaopcel02 date
      ssh vaopcel03 date
- Verify the status of the storage server environment:

      dcli -g cell_group -l root 'hostname -i'
      ssh vaopcel01
      cellcli> list griddisk
      cellcli> list celldisk

  Ensure the status is normal for all disks; repeat for all cell servers (vaopcel02, vaopcel03).
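The per-cell checks above can be collapsed into a single loop run from a DB node, assuming the hostnames from this post and passwordless root ssh. The `echo` prints each command rather than running it, so the sketch works outside an Exadata environment; remove it on the real system:

```shell
# Sketch: confirm ssh reachability and dump disk status for every cell
# in one pass (cellcli -e runs a single CellCLI command non-interactively).
cell_checks=$(
  for cell in vaopcel01 vaopcel02 vaopcel03; do
    echo ssh "$cell" "cellcli -e 'list griddisk attributes name,status'"
  done
)
echo "$cell_checks"
```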
- Stage all software in the staging area:
  * Storage, OS and InfiniBand patches
    * patch 19166601 – Storage Server and InfiniBand software
    * patch 18876946 – Database Server ULN ISO image
    * patch 16486998 – dbnodeupdate.sh script
  * Grid and database environment patches
    * 11.2.0.4.0 base install – patch 13390677
    * August 2014 PSU – patch 19023390
    * OPatch utility – patch 6880880
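A per-patch staging tree keeps those distributions straight during the install. The layout below is illustrative, not an Oracle convention, and the base path is an assumption:

```shell
# Sketch: stage each download in its own directory, named by patch number.
STAGE=${STAGE:-/tmp/exa_stage}
mkdir -p "$STAGE"/cell_ib_19166601 \
         "$STAGE"/os_iso_18876946 \
         "$STAGE"/dbnodeupdate_16486998 \
         "$STAGE"/grid_base_13390677 \
         "$STAGE"/psu_19023390 \
         "$STAGE"/opatch_6880880
ls "$STAGE"
```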
Storage server install (150 minutes)
Log in as root on the database server with ssh access to all cell servers.
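The storage server update is typically driven by the patchmgr utility shipped with patch 19166601. A sketch of the usual prereq-check-then-patch sequence follows; the staging path is an assumption, and `echo` prints the commands so the sketch runs off-cluster:

```shell
# Sketch: non-rolling storage server patch via patchmgr, run as root on a
# DB node. cell_group is the file listing all storage servers.
cell_patch_cmds=$(cat <<'EOF'
cd /tmp/exa_stage/cell_ib_19166601/patch_dir   # illustrative unzip location
./patchmgr -cells cell_group -patch_check_prereq
./patchmgr -cells cell_group -patch
./patchmgr -cells cell_group -cleanup
EOF
)
echo "$cell_patch_cmds"
```

A non-rolling patch takes all cells down at once, which is why the databases and cluster were stopped first.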
InfiniBand patch install (90 minutes)
Log in as root on the database server.
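The InfiniBand switch software is also updated with patchmgr, pointed at a list of switches instead of cells. The list file name below is an assumption, and `echo` keeps the sketch runnable off-cluster:

```shell
# Sketch: upgrade InfiniBand switch software from a DB node.
# ibswitches.lst holds one switch hostname per line (assumed file name).
ib_cmds=$(cat <<'EOF'
./patchmgr -ibswitches ibswitches.lst -upgrade
EOF
)
echo "$ib_cmds"
```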
Database server OS patch install (50 minutes for each DB server)
Log in as root on each DB server.
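The OS update on each database server is typically driven by dbnodeupdate.sh (patch 16486998) pointed at the ULN ISO image (patch 18876946). The ISO path and file name below are illustrative, and `echo` prints the sequence so the sketch runs outside an Exadata environment:

```shell
# Sketch: verify, apply, then complete the DB node OS update on one node.
dbnode_cmds=$(cat <<'EOF'
./dbnodeupdate.sh -u -l /tmp/exa_stage/os_iso_18876946/iso.zip -v   # prereq check only
./dbnodeupdate.sh -u -l /tmp/exa_stage/os_iso_18876946/iso.zip     # apply; node reboots
./dbnodeupdate.sh -c                                               # completion step after reboot
EOF
)
echo "$dbnode_cmds"
```

Running the nodes one at a time, as the 50-minutes-per-server estimate above implies, limits the blast radius if an update fails.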
With the patching process successfully completed, we then turned our attention to routine maintenance and optimization. Our client did not want to wait another two years—and take the chance on an expensive disk failure or security breach—to bring their system up to date. So we worked with them to routinely monitor their system, perform periodic health checks, and schedule quarterly patching.
They now have all the benefits of Exadata fixes, including better performance, reduced risk, and lower cost. But they also got something else from us: an extra hand with IT maintenance, without the added staffing costs, so they can really focus on initiatives that add value to their core business.
Have resource or skill gaps left little time for Oracle Exadata maintenance, putting your systems and business at risk? MiCORE’s certified Oracle engineers and consulting professionals can seamlessly fill resource or skill gaps to help you perform routine maintenance, optimize your environment and implement best practices—all so you have the time needed to drive strategic initiatives.