Backup and restore vCenter Server Appliance

There are times when things don’t go as smoothly as planned. For example, during routine VMware maintenance, while performing a basic step, you lose your VMware vCenter. As a result, you lose the ability to manage important aspects of your infrastructure. Depending on your time constraints for the maintenance window, you may have to consider backing out of the changes, which can be a challenge in and of itself. Here are some steps that can help. The goal is a smooth backup and restore of the vCenter Server Appliance.

The original problem

I mentioned that even basic things can go completely wrong. This was the case when I tried to replace the built-in/default SSL certificate with a standard CA-signed one on a vCenter Server Appliance 6 (VCSA), using the built-in Certificate Manager tool. I followed the instructions on the terminal, but at the end something went wrong and the tool reported a rollback to the original certificate. However, that didn’t work either: the rollback never finished, and while the certificate seemed to be in place, the “vpxd” service didn’t start, which prevented the web client from loading.

To this day, neither we nor the VMware technician are sure what the root cause was, or why the suggested fix steps didn’t work. At some point, I decided to simply install a new vCenter and restore the original vPostgres database. It is worth mentioning that we use the recommended VCSA deployment: two appliances, the vCenter appliance and the PSC appliance. The PSC’s most important function is handling Single Sign-On. I’ll refer to them as vCenter and PSC; both need to be operational for the solution to work.

The solution: steps to restore the vCenter Server Appliance, with some troubleshooting tips

  1. Take a snapshot of the PSC.
  2. Take note of the vCenter server build number.
    • Connect to vCenter via SSH or the console, authenticate, and enable the shell.
    • Run the following command to get the build of your vCenter:
      vpxd -v
      You should see something like this:
      VMware VirtualCenter 6.0.0 build-3018523
  3. Back up the vCenter database: Back up and restore vCenter Server Appliance/vCenter Server 6.0 vPostgres database
    • The following KB article describes the steps: kb.vmware.com/kb/2091961
    • Make sure you don’t mix up the vCenter Server and vCenter Server Appliance steps. The former is the Windows version; in this case, we need the latter, the Linux one.
    • At first, I saw a strange error message when running the script. The cause was that vPostgres was not running, but the error message did not state that explicitly. Make sure to start the vPostgres service by running:
      service-control --start vmware-vpostgres
  4. Decommission the vCenter Server appliance: Using the cmsso command to unregister vCenter Server from Single Sign-On
    • The following KB article describes this step: kb.vmware.com/kb/2106736
    • If you get an error like this while using WinSCP: “Host is not communicating for more than 15 seconds. If the problem repeats, try turning off ‘Optimize connection buffer size’.” then check out this KB for the solution: kb.vmware.com/kb/2107727
    • In case the PSC cannot talk to the vCenter Server appliance anymore, which was the case for me, you can still unregister vCenter from the PSC with the following command, which you need to run on the PSC:
      /usr/lib/vmware-vmdir/bin/vdcleavefed -h HOSTNAME_OR_IP -u administrator
      You should see something like this:
      “vdcleavefd offline for server HOSTNAME_OR_IP
      Leave federation cleanup done”
      Replace HOSTNAME_OR_IP with the correct hostname/IP of your vCenter appliance.
  5. Re-deploy a new vCenter with the same build number (noted in step 2) and the same networking settings.
  6. Recover the vCenter database: Back up and restore vCenter Server Appliance/vCenter Server 6.0 vPostgres database
    The following KB article describes this step: kb.vmware.com/kb/2091961
  7. Make sure all services start and vCenter is working properly. (A condensed command sketch of the whole procedure follows below.)
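For reference, here is a rough sketch of the shell commands behind steps 2-6. The script names backup_lin.py and restore_lin.py and their -f option are my reading of KB 2091961, and the exact services to stop and start may differ in your environment, so treat this as an outline and follow the KB articles for the authoritative procedure:

  # On the broken vCenter appliance: note the build number and back up the database
  vpxd -v
  service-control --start vmware-vpostgres
  python /tmp/backup_lin.py -f /tmp/VCDB_backup.bak

  # On the PSC: unregister the old vCenter if it can no longer be reached
  /usr/lib/vmware-vmdir/bin/vdcleavefed -h HOSTNAME_OR_IP -u administrator

  # On the freshly deployed vCenter (same build and network settings): restore and restart
  service-control --stop vmware-vpxd
  python /tmp/restore_lin.py -f /tmp/VCDB_backup.bak
  service-control --start --all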

If you have questions or comments, feel free to ask below. If you need help, contact us at support@newpush.com.


Newsletter – November 2016

NewPush newsletter – November 2016 edition

I would like to welcome you to the November 2016 edition of our newsletter.

Did you know that NewPush has been in business since 1999 providing stable and reliable service to our customers in both North America and Europe? Our cloud vision was implemented many years prior to the time the term ‘cloud’ was coined – we simply called it ‘Nethosting.’ Today, we compete successfully with major cloud providers by providing more support and superior services at a lower cost. Our easy-to-understand pricing results in predictable costs. Our pricing is geared to offer higher value than the likes of Amazon, Google, or Microsoft. We do that by tying the price to the value of our services. You don’t have to be at the mercy of some complex and opaque formula based on uncontrollable metrics. Forget about worrying what peak hours or periods of high server loads mean. We know that these are virtually impossible to verify and most importantly impossible to predict.

In addition to hosting and cloud services, we also provide help with software and website development. We offer a single point of contact for support regardless of the service, making things simple and saving time. Our recent offerings popular with developers and system integrators include testing as a service (TaaS) and cybersecurity as a service. Both provide a competitive advantage to online operations. We serve private and government entities in locations around the world. Our business shows strong growth due to our simple business proposition: we deliver quality on time and within budget. Simply put: we deliver more for less!

In this day and age, the basics are taken for granted. To differentiate our offering, NewPush includes complimentary value-added features. Here are three examples that show how these features can make a difference for businesses.

  • Spend less time on junk mail and more time with what is important. We have improved on standard spam filtering by deploying a professional solution powered by SpamExperts (SPEX). The spam filtering cluster has been custom tailored to fit each of our customers’ needs. Whether you are using a control panel or an Exchange cluster, with SPEX we increase the filtering efficiency and allow you to manage global filtering rules as well as personal preferences. Our system even learns from feedback based on what millions of users choose to release or block in their inbox. As a result, you are able to spend more time on what is really important.
  • Increase your audience by keeping your website current and relevant. Many customers love to use a simple content management system (CMS) like WordPress. It is easy and efficient. We made sure that our support engineers are trained to provide help and guidance with WordPress website development and maintenance to help you stay relevant. In addition, we can provide advice and support for keeping your website safe too.
  • Cyber Security continues to gain in importance. At NewPush we design our infrastructure and build our service with cybersecurity in mind. Our high BitSight score (the industry standard for Cyber Security) is a testament to our commitment and our results. The key to success in Cyber Security is to treat it like an arms race. We always search for ways to improve security by leveraging technology and experience. Besides industry best practices, we develop, test, and deploy new tools to monitor and protect our network. We proactively fix security issues to head off problems before they impact our customers.

These were just a few examples of how we differentiate our services to give you a competitive advantage in IT. And as IT needs are shifting, new challenges appear. We aim to provide a stable platform, with trustworthy and professional support.

Many of you know us from the work we have been doing for you. For others, we have become invisible because ‘it just works.’ And in some cases, you know us from the work we have done for you, but you don’t know of other areas where we could help you achieve a business advantage. We would like to fix that – hence this and future newsletters. We plan to share with you, on a regular basis, news, offerings, blogs, and discussions on topics of interest in this fast-moving IT space. We know you have a choice of vendors, and we feel privileged to have the opportunity to earn your confidence and trust every day. We hope you will enjoy reading this and future newsletters.

Sincerely,
Rene Sotola, CEO

Our featured customer for this month: The O’Shea Report

Kris and Tim O’Shea started their careers in HR, training, and sales by day. They honed their comedic skills in theatres and comedy clubs by night. Their experiences enable them today to be motivational speakers who understand people’s daily strife in a corporate environment. Their mission is to boost morale among employees while motivating them to thrive in a challenging world. The O’Sheas have been voted among the Top 5 Entertainment Keynote Speakers four years in a row by Speakers Platform and have presented The O’Shea Report to corporate audiences, such as General Motors, McDonald’s, State Farm, Panasonic, and Blue Cross Blue Shield. We are proud to have provided our small business hosting solutions to them for over a decade.

Tim, co-founder of The O’Shea Report explains it best:

“NewPush has been a solid and reliable partner of ours for over 15 years. They provide us with all necessary IT foundations to run our business: website hosting, email, spam filtering and virus protection. Their performance has consistently exceeded our expectations. They go the extra mile whenever it is needed and always respond quickly and efficiently if we need assistance. There is a reason we have been loyal customers of theirs for so many years: they provide outstanding service. Thank you NewPush!”

Success story: Web Hosting services for The O’Shea Report

“Providing essential hosting services (our website, email, spam filtering, and virus protection) enables us to focus on our business,” says Tim O’Shea, co-founder of The O’Shea Report. “What makes NewPush different from others in hosting? In one word: caring” he adds.

At NewPush, we believe that the foundation for good service is caring about the customer. We understand that to best serve our customers’ needs we need to take their perspective. This is exactly what we have been doing while hosting for Kris and Tim O’Shea for more than 15 years now.

Product highlight: Automated testing

Functional testing is an industry-wide challenge. As timelines slip, testing gets shortened. Subsequently, test coverage (defined as the percentage of function points exercised by tests) decreases and the risk goes up. While most software bugs don’t result in a loss of life, they can result in a loss of reputation and a damaged brand. Frequent tests uncover bugs quickly and allow intensive testing and increased test coverage while minimizing costs and elapsed time.

Manual testing is slow and expensive. Industry automated testing relies on scripts which – while better than manual testing as they allow some automation – are still relatively slow to write and hard to maintain, and the scriptwriter becomes a bottleneck.

NewPush provides testing as a service. Using leading industry tools – CGI’s TestSavvy – integrated with different “engines” (e.g. HP’s QTP, Selenium), NewPush can ‘automate the automation’: test cases are easily created through a GUI, and test execution can then be performed by anyone or started automatically at a particular date and time.

Users can choose to execute the test immediately or schedule the execution for the future. The color coding makes the test results easy to understand. Tests can be grouped into subsystems corresponding to functional areas, or software modules, or submodules. The reader can receive a summary (e.g. just failed tests) or look at all tests executed and then drill down on specific test results using GUIs. NewPush provides this solution as a TaaS offering (a combination of SaaS and testing services).


Tips and best practices for migrating a legacy website into Drupal 7 with the Migrate API

Last year, we migrated a site from a legacy CMS with a lot of custom code into Drupal 7. With the Migrate module (which is now in Drupal 8 core, by the way), it was not particularly hard to do, even though the source database was not well designed. The site had around 10k users and 100k entries that ended up as nodes in Drupal, plus around 700 videos to migrate. While this is not a huge migration by any means, it was big enough to make us think about best practices for Drupal migrations.

We have collected some tips and tricks if you need to do anything similar:

Building the right environment for the migration work

If you would like to work on a migration efficiently, you will need to plan a bit. There are some steps which can save you a lot of time down the road:

  • As always, a good IDE can help a lot. At NewPush, we use PhpStorm; in it, keeping the migration code and both the source and target databases open is straightforward. It is also really easy to filter the database table views for a given value, which comes in handy. That said, any tool is fine, as long as you are able to quickly check the difference between the source and target versions of the data, which is essential.
  • I know this can be a bit of a pain sometimes, but still: try to ensure that the site can be rebuilt from scratch if necessary. This is where techniques like Features step in. I will not explain this in detail since it is outside the scope of this article. Just make sure you ask yourself the question: “what if I really mess up the configuration of this site?” (In a real-world scenario you will often need to adjust things like content types, field settings, and so on, and you will probably need a method for keeping the final configuration in a safe place.)
  • Before getting started, try to figure out a good way to validate the migration. This is easy if you have 20-30 items to move over, but when you need to deal with 50k+ nodes it is not going to be trivial – especially if the source data is not very clean (see the count-comparison sketch after this list).
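One lightweight validation method that worked for us is comparing record counts between the legacy database and the migrated content. Here is a minimal sketch, assuming a MySQL source database named legacy with an old_articles table and an article content type on the Drupal side (all of these names are just placeholders for your own setup):

  # Rows in the legacy source table
  mysql legacy -N -e "SELECT COUNT(*) FROM old_articles"

  # Nodes created by the migration on the Drupal side
  drush sql-query "SELECT COUNT(*) FROM node WHERE type = 'article'"

  # Totals as the Migrate module sees them (imported vs. unprocessed)
  drush migrate-status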

How to work with your migrations

The next thing to work on is optimizing your day-to-day work. Making sure that you can perform the basic tasks fast is essential, and you will need to figure out the best methods for testing and avoiding errors as well.

  • Use Drush as much as possible. It is faster and less error-prone than the UI, and there are a lot of handy parameters for migration work. Speaking of parameters, the --feedback and --limit switches are really handy for quick testing. With the --idlist parameter, you can specify exactly what to import, which is great for checking edge cases (see the sketch after this list).
  • Try to run the full migration from time to time. This is usually not very convenient in a development environment, so having access to another box where you can leave a migration running for hours can make things quite a bit easier.
  • Don’t forget to roll back a migration before modifying the migration code – it is better to avoid database inconsistency issues.
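To make the switches above more concrete, here is a typical test cycle. The migration name ArticleNode is hypothetical; substitute the machine names that drush migrate-status shows for your project:

  # Import only the first 10 items and report progress every 30 seconds
  drush migrate-import ArticleNode --limit="10 items" --feedback="30 seconds"

  # Re-import a few specific source IDs to check edge cases
  drush migrate-import ArticleNode --idlist=2182763,2182764 --update

  # Roll back before touching the migration code, then verify the status
  drush migrate-rollback ArticleNode
  drush migrate-status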

Migration coding tips

When it comes to coding the migration itself, there are several things to consider; most of them are pretty basic coding best practices:

  • Try to extract the common functionality into a parent class. You will probably need some cleanup/convert routines; this is the best place for them.
  • Try to document your assumptions. Especially when dealing with not-so-clean data this can be really useful. (Will you remember why the row with ID 2182763 should be excluded from a given migration? I will not, so I always try to add a bunch of comments whenever I find such an edge case.)
  • Use the dd() function provided by Devel – this can make things easier to follow.
  • You will most likely run into the “Class MigrateSomething no longer exists” error at some point. Run drush migrate-deregister --orphans and it will go away.

How to optimize the performance of your migration code

It is no surprise that running a full migration often takes a very long time.

  • Other modules can greatly affect the speed of migration and rollback. Try to migrate with only a minimal set of modules enabled; this is the easiest way to get some extra speed (see the sketch after this list). Also, if you are on a dev machine, make sure that Xdebug is not active. That is a “great” way to make things much slower.
  • Using the migrate_instrument_*() functionality, you can figure out the bottlenecks in your migration.
  • In hook_migrate_api(), you can temporarily disable some hooks to speed up the migration.
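As a sketch of the minimal-modules approach, the module list below is only an example; disable whatever is safe to switch off in your environment and re-enable it once the run is done (ArticleNode is again a hypothetical migration name):

  # Disable modules that slow down node saves during the run
  drush pm-disable -y rules search statistics xmlsitemap

  # Run the long migration on a box where it can be left alone for hours
  drush migrate-import ArticleNode --feedback="1000 items"

  # Re-enable the modules once the migration has finished
  drush pm-enable -y rules search statistics xmlsitemap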

I hope you have found some useful tips in this article. If you have a question, a tip to add, or perhaps an interesting story about a migration that you have done, just share it in the comments – we are all ears!