Tinkering with CryptoMining

If you work in SecOps, you have probably seen the threat of cryptominers running on compromised hosts. This article may not be for you, then; but if you would like to dive deeper into the workings of crypto-mining, you will find a few resources here to get you started.

For this example I use the Moonlander 2 ASIC USB stick, as you can pick one up from Amazon for as little as $50 as of March 2020, and it has all the features needed to work with a Raspberry Pi. It lets you mine LTC (Litecoin.org).

# install the build dependencies
sudo apt-get install -y build-essential git autoconf automake libtool pkg-config \
  libcurl4-openssl-dev libudev-dev libusb-1.0-0-dev libncurses5-dev raspberrypi-kernel-headers

# build and install the Silicon Labs CP210x serial driver
# (download Linux_3.x.x_4.x.x_VCP_Driver_Source.zip first – see the VCP kernel drivers link in the resources)
cd
mkdir miners
cd miners
sudo unzip Linux_3.x.x_4.x.x_VCP_Driver_Source.zip
cd Linux_3.x.x_4.x.x_VCP_Driver_Source
make
sudo cp -a cp210x.ko /lib/modules/`uname -r`/kernel/drivers/usb/serial
sudo depmod  # register the new module so it loads when the stick is plugged in
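
Once the driver is installed, the stick is driven by bfgminer built with scrypt support. The sketch below shows a typical invocation; the pool URL, worker credentials, and clock value are placeholders, so substitute the settings from your own pool and from the Moonlander getting started instructions linked in the resources:

# example only: replace pool, worker, and clock with your own values
bfgminer --scrypt -o stratum+tcp://pool.example.com:3333 -u workername -p password -S MLD:all --set MLD:clock=600

Higher clock values raise the hash rate but also the heat and hardware error rate, so increase them gradually.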

Another example is the GekkoScience Bitcoin SHA256 Stick Miner, which lets you experiment with mining BTC (bitcoin.org).

I use the Raspberry Pi because it is a very low-cost environment you can build with your children at a very young age. You can do a lot more with it than just teaching about Bitcoin, blockchain, and mining.

The resources below have all the documentation necessary to get started.

The Moonlander device tends to lock up after a successful run with bfgminer, and the miner isn’t detected on subsequent runs, showing the status message:

------------------------------------------
NO DEVICES FOUND: Press 'M' and '+' to add
------------------------------------------

The solution is to remove the driver (cp210x) and unplug / plug back in the USB stick:

  1. sudo rmmod cp210x
  2. unplug the USB stick
  3. wait 10 seconds
  4. plug the USB stick back in
  5. check that the driver is re-registered by running lsmod | grep usb; the output should look like:
    usbserial XXXXX X cp210x
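
The same recovery sequence as shell commands (the physical replug in the middle cannot be scripted):

sudo rmmod cp210x        # unload the stuck driver
# unplug the stick, wait ~10 seconds, plug it back in
lsmod | grep usbserial   # cp210x should be listed again
ls /dev/ttyUSB*          # the miner should reappear as a serial device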

Resources

  • Raspberry Pi 4 on Amazon
  • Olimex
  • Moonlander 2 USB Stick ASIC on Amazon
  • Moonlander 2 USB Stick getting started instructions
  • Moonlander ASIC setup on Linux
  • VCP kernel drivers
  • GekkoScience Bitcoin Miner Setup on Linux / Raspberry Pi
  • GekkoScience Terminus R606 Miner Setup
  • Raspberry Pi dependencies for building kernel modules
  • ASIC Miner Valuation estimations
  • US-made ASIC miner


NewPush Recognized as a Top 20 VMware Cloud Provider in 2016

CIO Review recognition

NewPush has used VMware technologies since its inception in 1999, when the first dot-com boom was just heating up and many virtualization technologies were emerging for the Intel platform. Over the years we kept focusing on providing enterprise-grade infrastructure, and we kept increasing the role of VMware as we came to understand that, for Intel-based hardware, VMware provided the most reliable enterprise solutions. As a result, we moved VMware from our development labs to our production systems and data centers, and since the early 2010s we have formally been a VMware partner providing VMware Cloud solutions. Most noteworthy, the last few years have shown tremendous growth in the capabilities VMware Cloud delivers. It is therefore our pleasure to announce that CIO Review has recognized NewPush as a top 20 VMware technology provider.
20 most promising VMware Cloud solution providers - 2016

VMware Cloud Solutions

Important milestone for NewPush

This recognition is a milestone that is important to us. We have worked hard to pioneer and successfully deploy state-of-the-art VMware-based cloud technologies. Our recent work focuses on NSX, vSAN, and the vRealize suite. As we continue our quest to provide the best cloud services to our customers, we look forward to deploying the new Docker and Hadoop enablement technologies.

Looking ahead

Cloud technologies keep changing at an ever-increasing pace. Companies that stay ahead will continue to have a competitive advantage by providing a better customer experience. By partnering with NewPush on technology decisions, you can spend more time on your core business while ensuring that you have a trusted partner with a proven track record to help you keep a competitive edge on the IT front. If you would like the NewPush advantage for your company, please do not hesitate to get in touch today. We are here to help 24 hours a day, seven days a week.


Email Hosting: cPanel (Exim) email loop – Too many “Received” headers – suspected mail loop

Email Hosting Issue: email looping on cPanel (Exim)

When your server’s email flow stops, it is as if the lifeblood of the company stops. As soon as an email issue appears, we have to jump on it immediately and get to the core of the problem. Smart troubleshooting is key. At this point, we have to look under the hood of cPanel. cPanel (WHM) is an email hosting and website hosting automation control panel.

Every now and then, you get a cryptic bounce message that drives you to dig deeper. In this case, we first saw “potential email loop” in the bounce message. However, that was not enough, so we then looked at the email logs on cPanel, under “Track Delivery.” Meanwhile the customer’s email was bouncing and the pressure was mounting. In the end, we were able to fix the issue quickly.

Problem: email loop detected on email hosting server

You see the following symptom: you send mail to a user on cPanel, and the following error is displayed in the “Track Delivery” section of the user’s cPanel account:
Too many "Received" headers - suspected mail loop

Solution: fix MX settings of the email hosting control panel (WHM cPanel)

  • Go to the MX record section of cPanel.
  • Reset the delivery method to local.
  • If the method is already set to local, change it to “backup”, save, and then change it back to local.

What else can cause a mail loop? Make sure that you do not have a conflicting domain forwarder or email forwarder.

Background

cPanel uses Exim. With Exim, domains that receive mail locally are listed in /etc/localdomains; if the email is stored on a remote server, the domain is listed in /etc/remotedomains instead. The steps in the solution section act on the MX record editor of cPanel, and they force cPanel to repopulate these files properly.
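
To verify the result from the shell, you can check which file the domain ended up in and confirm where its MX record points (example.com is a placeholder):

# the domain should appear in exactly one of these two files
grep -i '^example\.com$' /etc/localdomains /etc/remotedomains
# confirm where the domain's MX record actually points
dig +short MX example.com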

If you have similar issues and have a hard time figuring out the solution, let us know. We are happy to help with any email system, cPanel, Plesk, Domino, or Exchange. Contact us at support at newpush.com, or through the contact us page.


Tips and best practices for migrating a legacy website into Drupal 7 with the Migrate API

Last year, we migrated a site from a legacy CMS with a lot of custom code into Drupal 7. With the Migrate module (which is now in Drupal 8 core, by the way), it was not really hard to do, even though the source database was not particularly well designed. The site had around 10k users and 100k entries that ended up as nodes in Drupal, along with around 700 videos to migrate. While this is still not a huge migration by any means, it was big enough to make us think about best practices for Drupal migrations.

We have collected some tips and tricks if you need to do anything similar:

Building the right environment for the migration work

If you would like to work on a migration efficiently, you will need to plan a bit. There are some steps which can save you a lot of time down the road:

  • As always, a good IDE can help a lot. At NewPush we use PhpStorm; keeping the migration code and both the source and target databases open in it is straightforward, and it is really easy to filter the database table displays for a given value, which comes in really handy. That said, any tool you can use is fine, as long as you are able to quickly check the difference between the source and target versions of the data, which is essential.
  • I know this can be a bit of a pain sometimes, but still: try to ensure that the site can be built from scratch if necessary. This is where techniques like Features step in (see the sketch after this list). I will not explain this in detail since it is outside the scope of the article. Just make sure you ask yourself the question: “what if I really mess up the configuration of this site?” (In a real-world scenario you will often need to adjust things like content types, field settings, and so on, and you will probably need a method for keeping the final configuration in a safe place.)
  • Before getting started, try to figure out a good way for validating the migration. This is kind of easy if you have 20-30 items to move over, but when you need to deal with 50k+ nodes, it is not going to be that trivial – especially if the source data is not very clean.
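
As a minimal sketch of the Features workflow mentioned above (the feature name my_content_types is a placeholder):

# export the current configuration into the feature module
drush features-update my_content_types
# revert the site to the configuration stored in code
drush -y features-revert my_content_types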

How to work with your migrations

The next thing to work on is optimizing your day-to-day work. Making sure that you can perform the basic tasks fast is essential, and you will need to figure out the best methods for testing and avoiding errors as well.

  • Use Drush as much as possible. It is faster and less error-prone than the UI, and there are a lot of handy parameters for migration work. Speaking of parameters, the --feedback and --limit switches are really handy for quick testing, and with the --idlist parameter you can specify exactly what to import, which is great for checking edge cases (see the example commands after this list).
  • Try to run the full migration from time to time. This is usually not very convenient in a development environment, so having access to another box where you can leave a migration running for hours can make things quite a bit easier.
  • Don’t forget to roll back a migration before modifying the migration code – it is better to avoid database inconsistency issues.
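
For reference, here is roughly what those day-to-day commands look like with the Drupal 7 Migrate module (MyMigration and the source ID are placeholders):

# quick test run: first 50 items, with a progress report every 30 seconds
drush migrate-import MyMigration --limit="50 items" --feedback="30 seconds"
# import only specific source rows, e.g. to check an edge case
drush migrate-import MyMigration --idlist=2182763
# roll back before changing the migration code
drush migrate-rollback MyMigration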

Migration coding tips

Talking about coding the migration itself, there are several things to consider; most of them are pretty basic coding best practices:

  • Try to extract the common functionality into a parent class. You will probably need some cleanup/convert routines; this is the best place for them.
  • Try to document your assumptions. Especially when dealing with not-so-clean data this can be really useful. (Will you remember why the row with ID 2182763 should be excluded from a given migration? I will not, so I always try to add a bunch of comments whenever I find such an edge case.)
  • Use the dd() function provided by Devel – this can make things easier to follow.
  • You will most likely run into the “Class MigrateSomething no longer exists” error at some point. Give it a drush migrate-deregister --orphans and it will go away.

How to optimize the performance of your migration code

It is no surprise that running a full migration often takes a very long time.

  • Other modules can greatly affect the speed of migration and rollback. Try to migrate with only a minimal set of modules enabled to speed things up; this is the easiest way to get some extra speed (see the sketch after this list). (Also, if you are on a dev machine, make sure that Xdebug is not active. That is a “great” way to make things much slower.)
  • Using the migrate_instrument_*() functionality, you can figure out the bottlenecks in your migration.
  • In hook_migrate_api(), you can temporarily disable some hooks to speed up the migration.
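
A sketch of trimming the module set for a long run; the module names are examples only, so check your own site's enabled list first:

# see what is currently enabled
drush pm-list --type=module --status=enabled
# disable non-essential modules for the duration of the migration
drush -y pm-disable dblog overlay toolbar
# ... run the migration, then re-enable them afterwards
drush -y pm-enable dblog overlay toolbar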

I hope you have found some useful tips in this article. If you have a question, a tip to add, or perhaps an interesting story about a migration that you have done, just share them in the comments, we are all ears!


Upgrading Android on Samsung Galaxy Pro B7510

The official tool for upgrading the Samsung Galaxy Pro B7510 is Kies, but in most cases the upgrade cannot be completed with it. Instead, there is a tool named Odin:
http://enzag.com/technology/android/how-to-upgrade-samsung-galaxy-pro-b7510-to-gingerbread/

It is recommended to register at SamFirmware, where you can download the official firmware; you can then install it using the steps described here:

http://www.sammobile.com/2014/12/23/firmware-for-samsungs-upcoming-tizen-phone-the-z1-now-online/8/?view=2588


Cognos 10.1 install on 64-bit CentOS 6.3

  1. yum update (then reboot if the kernel has been patched)
  2. yum install glibc.i686
  3. yum install openmotif
  4. yum install libgcc.i686
  5. yum install openmotif22
  6. yum install openmotif22.i686
  7. yum install xauth
  8. yum install libXtst
  9. tar xvzf bisrvr_linuxi8664h_10.1.1_ml.tar.gz
  10. cd linuxi38664h/
  11. ./isetup
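
For convenience, steps 2 through 8 can be collapsed into a single yum transaction (same package list):

yum install glibc.i686 openmotif libgcc.i686 openmotif22 openmotif22.i686 xauth libXtst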

CSF / LFD user / IP lockout info

Where can we view the lock out triggers / logs?

There are a few ways to do it.

The log itself is in /var/log/lfd.log – this provides you with all the information about what lfd is doing. lfd (technically the “login failure daemon”) is the process that keeps track of many things: login failures, but also other irregularities on a hosting platform, such as long-running processes, user-run script executions, and root logins.

Another way is to look at the output of the ‘csf -g’ command.
Its full use is: csf -g ip.address.goes.here

This will show you in real time whether or not a certain IP is accepted or dropped (denied). If the IP does not show up when searched this way, then csf/lfd has no blocks or accepts on that particular IP, at which point server-wide firewall settings will still apply; for example, a server-wide deny on a certain port.

To manually unblock an IP, you can use ‘csf -dr ip.addr.here’.

More usage information on csf is in the output of the ‘csf’ command, as well as on the author’s website at http://configserver.com/cp/csf.html.
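
Putting it together, a typical lockout investigation looks something like this (203.0.113.10 is a placeholder address):

# how is the firewall currently treating this address?
csf -g 203.0.113.10
# if it is in the deny list, remove the block
csf -dr 203.0.113.10
# watch lfd decisions as they happen
tail -f /var/log/lfd.log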


Setting up FileZilla for connecting to the server (first time only)

  1. Start FileZilla. I’m using FileZilla 3.5.0 for this tutorial, but the steps should be the same in any recent version.
  2. We will start by creating a new site in FileZilla so that we won’t have to fill in the credentials every time we would like to connect to the server. Open File > Site Manager… and click on the New Site button on the left. Edit the name of the site so it’s easy to recognize.
  3. Fill in the form on the right with the following information:
    • Host: [hostname/IP address]
    • Port: [FTP port]
    • Protocol: FTP – File Transfer Protocol
    • Encryption: Require explicit FTP over TLS
    • Logon type: Account
    • User: [your FTP username]
    • Password: [your FTP password]
    • Account: [the domain name for the user in Windows Server]. Then click on Connect to test the FTP connection.
  4. An Unknown certificate window will pop up. This means that FileZilla recognized that we are using a secure connection and is asking whether the information in the certificate is legitimate. If it is, click on OK. (You might want to check “Always trust certificate in future sessions” so that FileZilla won’t ask this again.)
  5. Wait a little until the server processes the login request – it shouldn’t take more than 10 seconds on a decent connection. If everything went right, the folders/files will appear in the right pane of FileZilla.

After this, you can use the FTP software as usual for downloading/uploading files to the server.

Connecting to the server

The next time you would like to access the FTP server just…

  1. Start FileZilla.
  2. Open up the Site Manager from the File menu.
  3. Select the saved site from the left and click on Connect.

FileZilla already knows the details of the connection from the steps above, so it should log in without any problem.


Creating an easy-to-deploy SSL certificate in PEM format

When ordering a secure certificate, most often one has to deal with the following files:

  • certificate key file (aka private key): .key
  • certificate request file: .csr
  • primary certificate file (issued by the CA): .crt
  • certificate chain (aka intermediate certificate, or sf bundle): sf_bundle.crt

As a result, when deploying to a web server, it is necessary to configure three files: the key, the cert, and the trust chain. However, a little-known fact is that these can be combined into a single “pem” file that holds all three. One may even optionally include the trusted root certificate. Here is how:

  • download your certificates (your_domain_name.crt) from your NewPush Customer Portal.
  • paste the entire body of each certificate one by one into one text file in the following order:
    • domain.key
    • domain.crt
    • sf_bundle.crt

    Make sure to include the beginning and end tags on each certificate. The result should look like this:

    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----

The number of

-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----

sections will depend on the length of the certificate trust chain.
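
Rather than pasting by hand, you can concatenate the files from a shell. The file names below follow the order given above; adjust them to match your own certificate files:

# build the combined PEM: key first, then the server cert, then the chain
cat domain.key domain.crt sf_bundle.crt > domain.pem
# sanity check: the certificate and the private key must share the same modulus
openssl x509 -noout -modulus -in domain.crt | openssl md5
openssl rsa -noout -modulus -in domain.key | openssl md5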


How to install Tomcat 6 on RHEL 6 or CentOS 6

Here are some steps to install Tomcat 6 on Red Hat 6 (or CentOS 6).

First we are going to prepare the repository:


# yum-priorities lets us control the precedence of the extra repositories
yum install yum-priorities
# add the RPMforge, EPEL, and JPackage repositories
rpm -Uvh http://apt.sw.be/redhat/el6/en/x86_64/rpmforge/RPMS/rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm
rpm -Uvh http://download.fedora.redhat.com/pub/epel/6/x86_64/epel-release-6-5.noarch.rpm
rpm -Uvh http://mirrors.dotsrc.org/jpackage/6.0/generic/free/RPMS/jpackage-utils-5.0.0-7.jpp6.noarch.rpm

Next we will install Java and Tomcat 6:


yum -y install java
yum -y install tomcat6 tomcat6-webapps tomcat6-admin-webapps

Finally we can launch Tomcat 6:


service tomcat6 start

To connect to Tomcat, just browse to port 8080 on the server, for example:


http://127.0.0.1:8080/

Here are a couple of diagnostic commands to test that Tomcat is running:

# service tomcat6 status
tomcat6 (pid 17318) is running... [ OK ]
# netstat -nlp|grep 800
tcp 0 0 0.0.0.0:8009 0.0.0.0:* LISTEN xxxxx/java
tcp 0 0 127.0.0.1:8005 0.0.0.0:* LISTEN xxxxx/java
# netstat -nlp|grep 8080
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN xxxxx/java

File Structure

The Red Hat file structure is different from the default file structure Tomcat 6 uses when installed from source. Here is the file structure used when installing with this method:

/etc/tomcat6 (this is where the main tomcat config files reside)
/usr/share/doc/tomcat6-* (package documentation)
/usr/share/tomcat6
/usr/share/tomcat6/bin
/usr/share/tomcat6/conf
/usr/share/tomcat6/lib
/usr/share/tomcat6/logs
/usr/share/tomcat6/temp
/usr/share/tomcat6/webapps
/usr/share/tomcat6/work
/var/cache/tomcat6
/var/cache/tomcat6/temp
/var/cache/tomcat6/work
/var/lib/tomcat6 (this is where you will add and/or change most of your files)
/var/lib/tomcat6/webapps
/var/log/tomcat6
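
As a quick usage sketch, deploying an application means dropping a WAR into the webapps directory noted above (mywebapp.war is a placeholder):

# deploy a WAR and restart the service
cp mywebapp.war /var/lib/tomcat6/webapps/
service tomcat6 restart
# watch the deployment in the main log
tail -f /var/log/tomcat6/catalina.out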

Here is an article that explains how to add support for JConsole debugging and/or monitoring to Tomcat:
https://wiki.internet2.edu/confluence/display/CPD/Monitoring+Tomcat+with+JMX