Cyber Security Advice for Medical Practices

The sudden increase in cyberattacks happening all around the world is not without its reasons. More than 80% of information – including private details about ourselves – is now stored digitally. Every piece of information is valuable to attackers, which is why we are now seeing more attacks, as well as new forms of attack, targeting both individuals and large corporations.

For medical practices, information security is essential. Patient information and details about the practice’s operations are too valuable to handle carelessly. There are ways to improve cybersecurity throughout your medical practice and we are going to discuss some of them in this article.

Follow the Standards

The healthcare industry is highly regulated down to the last letter and information security is no exception. The HIPAA medical information security guidelines are something that every healthcare service provider must follow.

Fortunately, most solutions available to the industry already take HIPAA compliance very seriously. You know you can count on the software, devices, and other solutions that comply with HIPAA to safeguard your information. Following the correct security standards is a great first step to take.

Secure the Equipment

Using the correct, well-secured equipment is another must. You can't count on poorly secured equipment, especially in today's world where attacks on IoT and other electronic devices are more common than ever. Similar to choosing software and solutions, there are standards to follow.

According to Rishin Patel, President and CEO of Insight Medical Partners, newer equipment is designed to be more secure from the ground up, especially compared to older alternatives. His company provides easy access to the most advanced products and technologies so that medical practices can remain safe and protected.

Have a Backup Routine

To have a strong information security foundation, the third thing you need to add is a good backup routine. Maintain on-site and off-site (cloud) backups of sensitive information so that your medical practice can recover seamlessly from a catastrophic cyberattack.

In the event of a ransomware attack, for instance, you can wipe your computers and restore essential data from multiple sources. When hardware fails, there is still a cloud backup to turn to. Adding a good backup routine to the practice's everyday workflow completes the equation and gives your medical practice a solid security foundation; a minimal scripted example follows below.
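As a minimal sketch only (the paths, the GPG recipient, the off-site host, and the 30-day retention are all assumptions to adapt, not a prescription), a nightly backup routine can be a small cron-driven shell script that keeps an encrypted copy both on-site and off-site:

  #!/usr/bin/env bash
  # Hypothetical nightly backup sketch: adjust paths, hosts, keys, and retention to your practice.
  set -euo pipefail

  SRC="/srv/practice-data"                                      # data to protect (assumed location)
  LOCAL_DEST="/mnt/backup"                                      # on-site backup disk
  REMOTE_DEST="backup@offsite.example.com:/backups/practice"    # off-site/cloud target (placeholder)
  STAMP="$(date +%F)"

  # 1. Create a compressed, encrypted archive (the GPG recipient key is assumed to exist).
  tar -czf - "$SRC" | gpg --encrypt --recipient backups@practice.example.com \
    > "$LOCAL_DEST/practice-$STAMP.tar.gz.gpg"

  # 2. Copy the same archive off-site so an on-site hardware failure is survivable.
  rsync -av "$LOCAL_DEST/practice-$STAMP.tar.gz.gpg" "$REMOTE_DEST/"

  # 3. Keep 30 days of local history.
  find "$LOCAL_DEST" -name 'practice-*.tar.gz.gpg' -mtime +30 -delete

Whatever tooling you end up using, test your restores regularly; a backup that has never been restored is not yet a real backup.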

Train the People

Once the foundation is laid, it is time to tackle the biggest information security challenge of them all: the people. Bad habits such as using weak or common passwords, sharing login credentials or user access with coworkers, clicking URLs from illegitimate sources, and copying data to flash drives without handling them properly are still the most common causes of successful cyberattacks.

It is imperative that the people who handle information know how to handle it securely. Information security training is a great way to change some of the more common bad habits quickly. As an extra layer of security, putting a set of security policies in place is also highly recommended.

There is much more you can do to protect your medical practice from cyberattacks, but these first steps are the ones to take to get started. Be sure to implement these measures before your practice becomes the victim of a cyberattack.


Data Loss: The Impact It Has On Businesses

Data loss knows no boundaries: it happens to companies of all shapes and sizes, from large corporations to small startups. The main issue with data loss is that it can strike at any time, triggering a domino-like chain of serious consequences for the business.

Wondering how data loss can impact your business? Below are some examples of the serious consequences that a loss of data can cause for a company.

Productivity disruption

Should your organization lose data, one of the first things to suffer will be workplace productivity. Whether the loss was caused by hacking, a network outage, or a software or hardware failure, it can have a serious impact on your business's productivity, as it can take hours or even days to get your lost data back. However, if you invest in a professional data backup and recovery service, such as MySQL data recovery, you can make the process of getting your company back up and running after a data loss much easier.

Reputation damage 

Of course, one area of your business that is far less easy to fix after this kind of disaster is your reputation. In the digital world, news travels fast, so if your company makes the news because its website is down or files have gone missing, customers will ask what happened, and your answers may cause long-term damage to your company. When data loss occurs, customers feel let down, because it is their private and confidential data that is on the line as well as yours. So when your company loses that data, it puts your reputation on the line and can have a long-lasting impact on your business and its future success.

Loss of customer loyalty

After a data loss event, customer loyalty often suffers as well. Your customers feel they can no longer trust your business with their sensitive information, so they take their money elsewhere. Once word spreads, you may struggle to find new clients, which can have a huge impact on your business and its success. Of course, while you could lose customers as a result, you could also get creative and find ways to win them back, as Virgin did after their big data loss in 2017: they apologised, put better safeguards in place for data, and offered everyone affected a cheaper deal on their services.

While data losses can be a total nightmare for businesses of all shapes and sizes, suffering a data loss doesn’t have to mean the end of your company. It simply means being smart about the next steps that you take, and making sure that you find ways to retain your customers and gain new ones, despite the breach in security and the lack of customer confidence in you and your brand.


NewPush Recognized as Top 20 VMware Cloud provider 2017

CIO Review recognition

NewPush has been using VMware technologies since its inception in 1999. At the time, the first dot-com boom was just heating up and many virtualization technologies were emerging for the Intel platform. Over the years we kept focusing on providing enterprise-grade infrastructure, and we steadily increased the role of VMware as we came to understand that, for Intel-based hardware, VMware provided the most reliable enterprise solutions. As a result, we moved our use of VMware from our development labs to our production systems and data centers. Since the 2010s we have formally been a VMware partner providing VMware Cloud solutions. Most noteworthy, the last few years have shown tremendous growth in the capabilities VMware Cloud delivers. Therefore, it is our pleasure to announce that, once again, CIO Review has recognized NewPush as a top 20 VMware technology provider.

Important milestone for NewPush

This recognition, for the second time in a row, is an important milestone for us. We have worked hard to pioneer and successfully deploy state-of-the-art VMware-based cloud technologies, and we have worked even harder to maintain a leadership position in this crowded space. Our work continues to focus on NSX, vSAN, and the vRealize suite. As we continue our quest to provide the best cloud services to our customers, we look forward to deploying advanced analytics capabilities centered around Splunk Enterprise security essentials.

Forward looking posture

Cloud technologies keep changing at an ever-increasing pace. In this year's edition of CIO Review, we dive deeper into iGRACaaS: identity governance, risk, and compliance as a service. Companies that stay ahead will continue to have a competitive advantage by providing a better customer experience. By partnering with NewPush for technology decisions, you can spend more time on your core business while ensuring that you have a trusted partner with a proven track record to help you keep a competitive edge on the IT front. If you would like the NewPush advantage for your company, please do not hesitate to get in touch today. We are here to help 24 hours a day, seven days a week.


Connecting Local Active Directory (AD) and Azure

Active Directory Cloud Enablement

Connecting local AD to Azure

With the deployment of more and more Office 365 services, managing separate AD instances can be daunting. Fortunately, Microsoft offers great tools to get your Active Directory cloud initiative working. Azure AD is the backing directory for the Office 365 services. In this article, I provide a summary of the key points to remember when connecting to Azure AD.

Microsoft provides a very powerful set of tools to easily connect a local Active Directory to Azure. There are also some advanced options available if you decide to use Azure as a full-blown AD server for your organization. However, it is important to be very careful: if the connection isn't done right, most if not all of your users will be locked out of their accounts. That means no email (Outlook), no SharePoint, no OneDrive.

The key is to configure the Azure AD Connect tool with a custom setting to make sure that the local domain doesn't take over the Office 365 domain. The following steps assume that you have Office 365 deployed for your main domain; for example, NewPush.com is our main domain.

Quick summary to connect the Active Directory Cloud 

1)    Check that all your local users have their email address set up properly in the "mail" attribute of your local AD (a quick way to check this in bulk is sketched after these steps). At this stage, you should also make sure that you have an Office 365 account set up with Global Admin privileges on the default Microsoft domain (e.g. globaladmin@yourdomain.onmicrosoft.com).

2)    Install Azure AD Connect. This is straightforward; however, make sure not to finish the install with the defaults, as we modify the sync rules in the next step. If you have already installed it with the wrong settings, you need to uninstall, reboot, and reinstall.

3)    Select the custom synchronization settings and choose the mail attribute as the UPN for synchronization, so that your main domain remains the one used on Office 365.
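As a rough way to verify step 1 in bulk (this is a sketch, not part of Microsoft's tooling: it assumes a Linux or WSL machine with OpenLDAP's ldapsearch installed and network access to a domain controller, and the hostname, bind account, and search base below are placeholders), you can list accounts whose "mail" attribute is empty:

  # Hypothetical check: list AD user accounts that have no "mail" attribute set.
  # dc01.example.local, the bind account, and the search base are placeholders; substitute your own.
  ldapsearch -LLL -H ldap://dc01.example.local \
    -D "svc_ldap@example.local" -W \
    -b "dc=example,dc=local" \
    "(&(objectClass=user)(objectCategory=person)(!(mail=*)))" \
    sAMAccountName userPrincipalName

Any account returned here will not match its Office 365 mailbox once synchronization starts, so fix those before installing Azure AD Connect.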

References for Active Directory to Azure Connection

1)      http://www.microsoft.com/en-us/download/details.aspx?id=47594

2)      https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnect-get-started-custom   Custom installation of Azure AD Connect, start to end.

 Please let me know if you found these instructions helpful, and do not hesitate to send me feedback.

 


The Growth in Geographic Information Technology


Every day, a staggering amount of new data is produced, and businesses and governments look for ways to use that data to help them plan and strategize, cut costs, innovate, and deliver services. While some companies and organizations are working at the cutting edge of data implementation, many are still lagging far behind when it comes to making optimum use of that data.

Geographic information technology has emerged as a critical data implementation tool that allows companies and governments to drill deeply into their data and use that data to help them determine their future direction.

What is Geographic Information Technology?

Geographic information technology allows data to be analyzed and then displayed on maps. It can be used to analyze anything from the most profitable locations for a restaurant chain's next outlet, to which areas of a city are responsible for the most pollution, to tracking disease outbreaks.

How Does a Geographic Information System Work?

A GIS captures, stores, and analyzes data according to its position on the planet's surface. Many different kinds of data can be displayed on one map, which allows patterns and relationships between elements to be discovered.

The location data used can be cross-referenced with other information in the database to show information about the people in that place, such as their education levels, income levels and life expectancy. Another example would be mapping locations of water or air pollution and then looking for the source based on all of the farms and factories in a given area.

An organization will need specialists holding a geographic information science and technology degree to implement and administer a GIS. Members of a company's IT team can also undertake a part-time online GIST program to acquire the relevant education and skills.

Which Types of Organization Can Benefit From Using a Geographic Information System?

Because the benefits of understanding location data are so far reaching, a wide range of companies and government departments can gain from using a geographic information system.

The US Department of Defense uses analysis provided by data specialists at the National Geospatial-Intelligence Agency to identify potential threats to national security.

The Centers for Disease Control and Prevention uses GIS to help shape public health-related policy decisions.

Local government bodies provide location data mapping to various departments, including those responsible for public utilities and services, property tax assessments, highways, and town planning. For example, accident hotspots can be identified and steps taken to minimize risks in those areas. Location data can also help to develop a local tourism strategy, or plan new residential communities.

Companies can use a geographic information system to manage their supply and distribution chains more effectively, and to plan new locations based on local income levels, population density, and the road and other transport networks that make traveling to the location feasible.

Geographic information technology enables a better understanding of data and can ensure smarter use of resources and more efficient targeting of services, both in the public and the private sector.

Reference

http://gis.usc.edu/msp-resources/articles-blogs/reasons-why-gis-Matters/

 

 


Backup and restore vCenter Server Appliance

There are times when things don't go as smoothly or work as planned. For example, during routine VMware maintenance, while performing a basic step, you lose your VMware vCenter. As a result, you lose the ability to manage important aspects of your infrastructure. Depending on your time constraints for the maintenance window, you may have to consider backing out of the changes, which can be a challenge in and of itself. Here are some steps that can help. The idea is to have a smooth backup and restore of the vCenter Server Appliance.

The original problem

I mentioned that even basic things can go completely wrong. This was the case when I tried to replace the built-in / default SSL certificate with a standard CA-signed one on a vCenter Server Appliance 6 (VCSA), using the built-in tool, the Certificate Manager. I followed the instructions in the terminal, but at the end something went wrong and the tool reported rolling back to the original certificate. That didn't work either: the rollback never finished, and while the certificate seemed to be "in place", the "vpxd" service didn't start, which prevented the web client from loading.

To this day, neither we nor the VMware technician are sure what the root cause was, or why the "fix steps" didn't work. At some point, I decided to just install a new vCenter and restore the original vPostgres database. It is worth mentioning that we use the recommended VCSA setup: two appliances, the vCenter appliance and the PSC appliance. The PSC's most important function is handling Single Sign-On, so I'll refer to them as vCenter and PSC; both need to be operational for the solution to work.

The solution: steps to restore the vCenter Server Appliance, with some troubleshooting tips

  1. Take a snapshot of the PSC.
  2. Take note of the vCenter server build number.
    • Connect to vCenter with SSH or console, authenticate and enable shell
    •  Run the following command to get the build of your vCenter:
      vpxd
      You should see something like this:
      VMware VirtualCenter 6.0.0 build-3018523
  3. Back up the vCenter database: Back up and restore vCenter Server Appliance/vCenter Server 6.0 vPostgres database
    • The following KB article describes the steps: kb.vmware.com/kb/2091961
    • Make sure you don't mix up the vCenter Server and vCenter Server Appliance steps. The former is the Windows version; in this case we need the latter, the Linux one.
    • I initially saw a strange error message when running the script. It turned out that vPostgres was not running, but the error message did not explicitly state that. Make sure the vPostgres service is started by running:
      service-control --start vmware-vpostgres
  4. Decommission the vCenter Server appliance: Using the cmsso command to unregister vCenter Server from Single Sign-On
    • The following KB article describes this step: kb.vmware.com/kb/2106736
    • If you get an error like this while trying to use WinSCP: “Host is not communicating for more than 15 seconds. If the problem repeats, try turning off ‘Optimize connection buffer size’.”, check out this KB for the solution: kb.vmware.com/kb/2107727
    • If the PSC can no longer talk to the vCenter Server appliance, which was the case for me, you can still unregister vCenter from the PSC with the following command, run on the PSC (replace HOSTNAME_OR_IP with the correct hostname/IP of your vCenter appliance):
      /usr/lib/vmware-vmdir/bin/vdcleavefed -h HOSTNAME_OR_IP -u administrator
      You should see something like this: “vdcleavefd offline for server HOSTNAME_OR_IP
      Leave federation cleanup done”
  5. Re-deploy a new vCenter appliance with the same build number (noted in step 2) and the same networking settings.
  6. Recover the vCenter database: Back up and restore vCenter Server Appliance/vCenter Server 6.0 vPostgres database
    The following KB article describes this step: kb.vmware.com/kb/2091961
  7. Make sure all services start and vCenter is working properly. (A condensed command sketch of the backup and restore steps follows below.)
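For reference, the database backup and restore portion of the procedure boils down to a handful of shell commands on the appliances. This is only a condensed sketch: the Python script names and paths are placeholders based on KB 2091961, so download and use the exact scripts and steps from the KB.

  # --- On the old vCenter appliance: note the build and back up the vPostgres database ---
  vpxd                                        # prints something like "VMware VirtualCenter 6.0.0 build-3018523"
  service-control --start vmware-vpostgres    # the backup script fails with an unclear error if vPostgres is stopped
  python /tmp/linux_backup_restore/backup_lin.py -f /tmp/backup_VCDB.bak    # script name/path: see KB 2091961

  # --- On the PSC, only if the old vCenter can no longer be reached ---
  /usr/lib/vmware-vmdir/bin/vdcleavefed -h OLD_VCENTER_HOSTNAME_OR_IP -u administrator

  # --- On the freshly deployed vCenter appliance (same build number): restore the database ---
  service-control --stop vmware-vpxd
  python /tmp/linux_backup_restore/restore_lin.py -f /tmp/backup_VCDB.bak   # script name/path: see KB 2091961
  service-control --start --all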

If you have questions or have comments feel free to ask below. If you need help, contact us at support@newpush.com.


NewPush Recognized as Top 20 VMware Cloud provider 2016

CIO Review recognition

NewPush has been using VMware technologies since its inception in 1999. At the time, the first dot-com boom was just heating up and many virtualization technologies were emerging for the Intel platform. Over the years we kept focusing on providing enterprise-grade infrastructure, and we steadily increased the role of VMware as we came to understand that, for Intel-based hardware, VMware provided the most reliable enterprise solutions. As a result, we moved our use of VMware from our development labs to our production systems and data centers. Since the 2010s we have formally been a VMware partner providing VMware Cloud solutions. Most noteworthy, the last few years have shown tremendous growth in the capabilities VMware Cloud delivers. Therefore, it is our pleasure to announce that CIO Review has recognized NewPush as a top 20 VMware technology provider.
20 most promising VMware Cloud solution providers - 2016

VMware Cloud Solutions

Important milestone for NewPush

This recognition is an important milestone for us. We have worked hard to pioneer and successfully deploy state-of-the-art VMware-based cloud technologies. Our recent work focuses on NSX, vSAN, and the vRealize suite. As we continue our quest to provide the best cloud services to our customers, we look forward to deploying the new Docker and Hadoop enablement technologies.

Looking ahead

Cloud technologies keep changing at an ever-increasing pace. Companies that stay ahead will continue to have a competitive advantage by providing a better customer experience. By partnering with NewPush for technology decisions, you can spend more time on your core business while ensuring that you have a trusted partner with a proven track record to help you keep a competitive edge on the IT front. If you would like the NewPush advantage for your company, please do not hesitate to get in touch today. We are here to help 24 hours a day, seven days a week.


Email Hosting: cPanel (Exim) email loop – Too many “Received” headers – suspected mail loop

Email Hosting Issue: email looping on cPanel (Exim)

When your server's email flow stops, it is as if the lifeblood of the company has stopped flowing. As soon as an email issue appears, we have to jump on it immediately and get to the core of the problem. Smart troubleshooting is key, and at this point we have to look under the hood of cPanel. cPanel (WHM) is an email hosting and website hosting automation control panel.

Every now and then, you get a cryptic bounce message that drives you to dig deeper. In this case, we first saw "potential email loop" in the bounce message. However, that was not enough; we then had to look at the email logs on cPanel, and the place to look was "Track Delivery." Meanwhile, the customer's emails were bouncing and the pressure was mounting. In the end we were able to fix the issue quickly.

Problem: email loop detected on email hosting server

You see the following symptom: you send mail to a user on cPanel, and the following error is displayed in the "Track Delivery" section of the user's cPanel account:
Too many "Received" headers - suspected mail loop

Solution: fix MX settings of the email hosting control panel (WHM cPanel)

  • Go to the MX record section of cPanel.
  • Reset the delivery method to local.
  • If the method is already set to local, change it to "backup", save, and then change it back to "local".

What else can cause a mail loop? Make sure that you do not have a conflicting domain forwarder or email forwarder.

Background

cPanel uses Exim. With Exim, domains that should be delivered locally are listed in /etc/localdomains; if the email is stored on a remote server, the domain should be listed in /etc/remotedomains instead. The steps in the solution section act on the MX record editor of cPanel and force cPanel to repopulate these files properly (a quick way to inspect them is shown below).
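To see which way Exim will route a given domain, you can inspect those files (and ask Exim itself) directly on the server; example.com below is a placeholder for the affected domain:

  # Run as root on the cPanel server. Listed in localdomains => delivered locally;
  # listed in remotedomains => routed to the remote MX.
  grep -x "example.com" /etc/localdomains
  grep -x "example.com" /etc/remotedomains

  # Ask Exim how it would actually route a test address.
  exim -bt user@example.com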

If you have similar issues, and have a hard time figuring out the solution, let us know. We are happy to help with any email system, cPanel, Plesk, Domino, or Exchange. Contact us at support at newpush.com, or through the contact us page.


Tips and best practices for migrating a legacy website into Drupal 7 with the Migrate API

Last year, we migrated a site from a legacy CMS with a lot of custom code into Drupal 7. With the Migrate module (which is now in Drupal 8 core, by the way), it was not really hard to do, even though the source database was not particularly well designed. The site had around 10k users and 100k entries that ended up as nodes in Drupal, plus around 700 videos to migrate. While this is still not a huge migration by any means, it was big enough to make us think about best practices for Drupal migrations.

We have collected some tips and tricks if you need to do anything similar:

Building the right environment for the migration work

If you would like to work on a migration efficiently, you will need to plan a bit. There are some steps which can save you a lot of time down the road:

  • As always, a good IDE can help a lot. At NewPush, we use PhpStorm; there, keeping the migration code and both the source and target databases open is straightforward. It is also really easy to filter the database table displays when looking for a given value, which comes in really handy. In any case, any tool you have is fine, as long as you are able to quickly check the difference between source and target versions of the data, which is essential.
  • I know this can be a bit of a pain sometimes, but still: try to ensure that the site can be built from scratch if necessary. This is where techniques like Features come in. I will not explain this in detail, since it is outside the scope of this article. Just make sure you ask yourself the question: "what if I really mess up the configuration of this site?" (In a real-world scenario you will often need to adjust things like content types and field settings, and you will probably need a method for keeping the final configuration in a safe place.)
  • Before getting started, try to figure out a good way for validating the migration. This is kind of easy if you have 20-30 items to move over, but when you need to deal with 50k+ nodes, it is not going to be that trivial – especially if the source data is not very clean.

How to work with your migrations

The next thing to work on is optimizing your day-to-day work. Making sure that you can perform the basic tasks fast is essential, and you will need to figure out the best methods for testing and avoiding errors as well.

  • Use Drush as much as possible. It is faster and less error-prone than the UI. There are a lot of handy parameters for migration work – check out this documentation page for details. Speaking of parameters, the --feedback and --limit switches are really handy for quick testing, and with the --idlist parameter you can specify exactly what to import, which is great for checking edge cases (see the sketch after this list).
  • Try to run the full migration from time to time. This is usually not very convenient in a development environment, so having access to another box where you can leave a migration running for hours can make things quite a bit easier.
  • Don’t forget to roll back a migration before modifying the migration code – it is better to avoid database inconsistency issues.
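For example (the migration name ArticleNodes and the source IDs are made up for illustration; substitute your own), a typical quick test cycle with those switches looks like this:

  # Roll back the previous test run before re-importing, to avoid inconsistent data.
  drush migrate-rollback ArticleNodes

  # Import only 50 items, reporting progress every 10, to sanity-check the field mappings quickly.
  drush migrate-import ArticleNodes --limit="50 items" --feedback="10 items"

  # Re-import just a couple of known edge-case source IDs.
  drush migrate-import ArticleNodes --idlist=2182763,2182764

  # Check where every migration stands.
  drush migrate-status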

Migration coding tips

Talking about coding the migration itself, there are several things to consider; most of them are pretty basic coding best practices:

  • Try to extract the common functionality into a parent class. You will probably need some cleanup / convert routines; this is the best place for them.
  • Try to document your assumptions. Especially when dealing with not-so-clean data this can be really useful. (Will you remember why the row with ID 2182763 should be excluded from a given migration? I will not, so I always try to add a bunch of comments whenever I find such an edge case.)
  • Use the dd() function provided by Devel – this can make things easier to follow.
  • You will most likely run into the "Class MigrateSomething no longer exists" error at some point. Run drush migrate-deregister --orphans and it will go away.

How to optimize the performance of your migration code

It is no surprise that running a full migration often takes a very long time.

  • Other modules can greatly affect the speed of migration / rollback. Try to migrate with only a minimal set of modules to speed things up. This is the easiest way to get some extra speed. (And also if you are on a dev machine, make sure that Xdebug is not active. That is a “great” way to make things much slower.)
  • Using the migrate_instrument_*() functionality, you can figure out the bottlenecks in your migration.
  • In hook_migrate_api(), you can temporarily disable some hooks to speed up the migration. This article on Drupal.org explains it in detail.

I hope you have found some useful tips in this article. If you have a question, a tip to add, or perhaps an interesting story about a migration that you have done, just share them in the comments; we are all ears!


5 Reasons 2016 Will Be the Year of the “New IT”

By Thor Olavsrud –

Amit Pandey, CEO of cloud application delivery specialist Avi Networks, says 2016 will be the year of the "new IT." The confluence of cloud environments and applications, along with technologies like software-defined networking, is going to rewrite the rules for IT in 2016, he says.

The transformation has been underway for some time, but Pandey says that in 2016, CIOs and other tech leaders will have to revamp the way they approach IT if they want their companies to keep pace and remain competitive. They’ll need to purge legacy systems that hamper their agility and embrace digital transformation in the form of the Internet of Things (IoT), artificial intelligence and other “third-platform technologies” (as research firm IDC calls them).

Here are Pandey’s five “new IT” predictions for 2016.

1. Application owners will own IT

In 2016, DevOps and IT will become synonymous, Pandey says. CIOs will have to adopt an app-centric mindset or risk losing influence. That means application owners will be in the driver’s seat when it comes to choosing the tools, techniques and skills they’ll need. IT will have to get behind providing application owners with self-service capabilities.

“LOBs are now key influencers of IT strategy and demand a seamless user experience for their teams and customers,” Pandey says. “‘Throw-over-the-wall’ provisioning of application services on traditional data center infrastructure will give way to application services (like security, load balancing and analytics) and closer to app development time. Sizeable IT investment and CIO mindshare will be applied to developer-friendly, software-driven choices and consumption base (as opposed to large capital equipment) platforms and services.”…

 

Read more here:

http://www.cio.com/article/3014674/innovation/5-reasons-2016-will-be-the-year-of-the-new-it.html