Resolving Networking Problems (Performance Problems) of a WD My Cloud NAS System

Ever since I bought a Western Digital My Cloud system with 4 TB of backup space, I have had problems with it: after some hours, the system became unreachable over the network. Several firmware upgrades later, the problem had gotten worse, and the system was reachable only for 10 to 20 minutes after each power cycle.

In this little blog post, I summarize the measures I took to get it back to life.


A deeper analysis with Wireshark showed that the system was still up and running, but it was sending TCP RST packets back whenever I tried to connect to the web interface. Backing up data was not possible anymore either. Removing and re-attaching the power cable helped, but for no longer than 10 to 20 minutes. Then I saw the same symptoms again.

Searching for a working solution via Google was in vain, so I thought I would have to throw the system away and buy something better. However, I gave it a last chance and did the following:

Step 1: Remove and re-attach the power cable

Step 2: Connect to the Web interface (possible for only ~10 to 20 minutes after powering up).

Step 3: Enable SSH root access

Step 4: Connect via SSH and change the default password to an individual password

Root Cause:

After 10 or 20 minutes, I saw the same symptoms again: the web interface was sending TCP Reset messages. However, the SSH connection was still up and running. This helped me figure out the problem.

top (hitting M sorts the entries by memory) showed that the system was overwhelmed memory-wise: before I “repaired” my system, the total memory used was around 100% of the physical memory (only 256 MB!) plus another 50-70% of the swap space (500 MB). The heavy swapping caused the system to become unresponsive. The hard disks were continuously and quite audibly working.

The CPU load was okay, so that was not the problem.
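The same numbers can also be captured non-interactively, which is handy if you want to log memory usage over time. This uses standard procps commands, which should also be available on the NAS's Debian-based firmware:

```shell
# RAM and swap usage in MB (the same summary top shows):
free -m
# The five most memory-hungry processes, by resident set size:
ps -eo pid,pmem,rss,comm --sort=-rss | head -n 6
```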

Output of top, sorted by Memory (after measures taken)

I could see that ten apache2 processes were running, each occupying about 8% of the memory, summing up to 80% of the available memory. As you can see above, switching off seven of those ten apache2 processes was the key to success; a solution I had not found anywhere in the forums discussing the exact same unreachability symptoms with WD My Cloud. However, I found this StackOverflow question asking why there are so many Apache processes running. The answer: “Check your httpd.conf for MinSpareServers, MaxSpareServers and ServerLimit”.
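You can verify the process count and the combined memory share of the Apache workers yourself with ps (a quick sanity check, assuming the usual procps-style options):

```shell
# Number of running apache2 processes:
ps -C apache2 -o pid= | wc -l
# Their combined share of physical memory, summed from the %MEM column:
ps -C apache2 -o pmem= | awk '{ sum += $1 } END { printf "%.1f%% of RAM\n", sum }'
```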

Still, it was not easy to find the location where those variables are defined on WD My Cloud. They are not set in /etc/apache2/apache2.conf, as seems to be the case for most other Apache2 servers. A search revealed the location:

WDMyCloud:~# grep -iR MaxSpareServers /etc 2>/dev/null
/etc/apache2/mods-available/mpm_prefork.conf:# MaxSpareServers: maximum number of server processes which are kept spare
/etc/apache2/mods-available/mpm_prefork.conf:    MaxSpareServers       10
/etc/apache2/mods-enabled/mpm_prefork.conf:# MaxSpareServers: maximum number of server processes which are kept spare
/etc/apache2/mods-enabled/mpm_prefork.conf:    MaxSpareServers       10

And here we find a confirmation: on WD My Cloud, the values need to be edited in /etc/apache2/mods-available/mpm_prefork.conf.

This is where I changed the values from 10 to 3:

Resolution of the Problem

Step 5: Reduce the number of Apache2 Processes

As root, edit the file /etc/apache2/mods-available/mpm_prefork.conf.

Before the change, I found the following relevant content in the file:

    StartServers          2
    MinSpareServers       2
    MaxSpareServers       10
    MaxRequestWorkers     10
    MaxConnectionsPerChild  10000

I have changed this to:

    StartServers          2
    MinSpareServers       2
    MaxSpareServers       3
    MaxRequestWorkers     3
    ServerLimit           3
    MaxConnectionsPerChild  10000

I have read somewhere that ServerLimit should have the same value as MaxSpareServers, so I added it to the configuration (I have not tested without this entry, so it may have no effect).
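If you prefer scripting the edit over doing it by hand in vi, the two value changes can also be applied with sed; this is just an equivalent shortcut for the manual edit above (the new ServerLimit line still needs to be added manually), so keep a backup of the file first:

```shell
cd /etc/apache2/mods-available
cp mpm_prefork.conf mpm_prefork.conf.bak    # keep a backup copy
# Lower both limits from 10 to 3, keeping the original indentation:
sed -i -e 's/^\(\s*MaxSpareServers\s\+\)10\s*$/\13/' \
       -e 's/^\(\s*MaxRequestWorkers\s\+\)10\s*$/\13/' mpm_prefork.conf
# Verify the result:
grep -E 'MaxSpareServers|MaxRequestWorkers|ServerLimit' mpm_prefork.conf
```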

Step 6: Restart Apache

Restart the Apache server with:

service apache2 restart

Check the memory consumption e.g. with top. It should have improved by now.

Step 7 (optional): Switch off additional services

You can switch off some additional services you do not need on your WD My Cloud. In my case, I chose a quick and dirty way of doing this by renaming the following files:

mv /etc/init.d/twonky /etc/init.d/twonky.orig
mv /etc/init.d/wdmcserverd /etc/init.d/wdmcserverd.orig
mv /etc/init.d/wdnotifierd /etc/init.d/wdnotifierd.orig
mv /etc/init.d/wdphotodbmergerd /etc/init.d/wdphotodbmergerd.orig

However, I think this is not strictly needed, and furthermore, it leads to error messages on the Windows backup client saying that a backup destination was not found. Anyway, I have kept it this way, because the backup works fine, even though this error message pops up from time to time…
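As a less intrusive alternative to renaming the scripts, you could stop the services and remove them from the boot sequence with update-rc.d. This is an untested sketch; it assumes the Debian-based firmware ships that tool, which I have not verified for every firmware version:

```shell
# Stop each service now and disable it at boot; this can be undone
# later with "update-rc.d <name> enable".
for svc in twonky wdmcserverd wdnotifierd wdphotodbmergerd; do
    /etc/init.d/$svc stop
    update-rc.d $svc disable
done
```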

Step 8 (needed only if Step 7 was performed before):




Since then, the system has been working silently (no more audible disk swapping) and reliably. It has been up and running for more than 11 days now; continuous backup as well as the web interface are working fine, and the web portal’s responsiveness is much higher than ever before.


I hope this will help someone out there as well.

P.S.: I have included this information also in this forum post. The other forum posts discussing similar issues (e.g. here, here and here) were closed, so I could not add the information there.


Docker HTTP Proxy and DNS Configuration Cheat Sheet (now includes automatic HTTP Proxy detection)

This blog post provides a little cheat sheet on running a Linux host, with or without a Docker client, behind HTTP proxies. For Ubuntu and similar Linux distributions, we will also show how to detect the available proxy and adapt the proxy settings accordingly. This comes in handy if you are a road warrior who often switches between a network with a proxy and a network with no proxy or a different one, without restarting the Linux host (e.g. a notebook you only place in standby or hibernate mode). This is not tested on other Linux distributions, but might work there as well.


For setting the HTTP proxy settings of a Docker host, you can try to follow the official Docker instructions on handling HTTP proxy configuration. As far as I remember, those instructions did not work on my Ubuntu Docker host (I cannot easily test it at the moment). This is why I dug deeper into the matter and why I have come up with this little cheat sheet.

The steps needed to connect a Docker client to Docker Hub via an HTTP proxy seem to depend on the Linux distribution used. I have tested the HTTP proxy configuration on Ubuntu and CoreOS. For most Debian-based Linux distributions, I guess you can follow the same instructions as given here for Ubuntu. In any case, check out the various answers and comments in this StackOverflow question if the instructions below do not work. Feedback is highly welcome.


Manual Settings (Ubuntu)

First let us detect, whether sudo is needed:

sudo echo hello > /dev/null 2>&1 && SUDO=sudo

On Ubuntu (tested with 14.04 LTS), we need to configure the file /etc/default/docker.

$SUDO vi /etc/default/docker

and add the proxy settings, if needed:

export http_proxy='http://proxy.example.com:8080'
export https_proxy='http://proxy.example.com:8080'

In case your HTTP proxy requires authentication, try with:

export http_proxy='http://yourname:yourpassword@proxy.example.com:8080'
export https_proxy='http://yourname:yourpassword@proxy.example.com:8080'

In any case, replace the values for yourname, yourpassword, the proxy FQDN or IP address (proxy.example.com) and the TCP port (8080) with the values that apply to your environment.
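Before restarting Docker, it can save time to confirm that the proxy and the credentials work at all, e.g. with curl (assuming curl is installed; substitute your own proxy values and any reachable HTTPS URL):

```shell
# -x sends the request through the proxy; -I fetches only the headers.
curl -x 'http://yourname:yourpassword@proxy.example.com:8080' \
     -I https://index.docker.io/v1/
```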


Then restart the Docker service (upstart syntax, as used by Ubuntu 14.04):

$SUDO restart docker

Then you can test it with:

$SUDO docker search ubuntu

This should work now. If the proxy configuration is not correct, you will encounter an error message similar to the following line:

Get https://index.docker.io/v1/repositories/busybox/images: dial tcp: lookup index.docker.io on no answer from server

Automated Settings

If you are running a Linux host on your notebook (with or without Docker) and you often change the network environment without rebooting the Linux machine, you might find it handy to set the proxy automagically just by typing p<Enter>. This can be done using my linux_set_proxy Git repository as follows:

git clone https://github.com/oveits/linux_set_proxy
cd linux_set_proxy; sudo ./install.sh

Then each time the proxy has changed, you just type:

source proxy <proxy-name_or_IP> <proxy_port>

If the proxy is not available, the proxy settings are reset automatically.
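The availability test itself needs no external tools; a TCP connect attempt with a short timeout is enough. The following is only an illustrative sketch of the idea, not the actual code from the repository:

```shell
# Succeeds (exit code 0) if a TCP connection to host:port
# can be established within 2 seconds.
proxy_reachable() {
    timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if proxy_reachable proxy.example.com 8080; then
    echo "proxy reachable, keeping settings"
else
    echo "proxy not reachable, resetting settings"
    unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
fi
```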

Simplified Usage

Consider you are switching between network environments with no proxy, proxy1.example.com, proxy2.example.com and proxy3.example.com, using the ports 81, 8080 and 81, respectively; then you just need to define an alias as follows:

alias p='source proxy proxy1.example.com 81 || source proxy proxy2.example.com 8080 || source proxy proxy3.example.com 81'

(note the quotes, which are needed because the alias definition contains spaces)

If you want this definition to persist, you can add this line to your ~/.bashrc file (if bash is your favorite Linux shell).

Next time the proxy has changed, you just need to type

p

and the program will test which proxy is available and then:

  1. adjust the settings of the HTTP_PROXY, HTTPS_PROXY environment variables, and
  2. change the Docker configuration file and restart Docker, making sure that any subsequent docker commands use the right HTTP proxy settings
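In essence, those two actions boil down to something like the following sketch for the Ubuntu case (illustrative only; the actual logic lives in the linux_set_proxy repository, and the proxy URL is a placeholder):

```shell
PROXY_URL="http://proxy.example.com:8080"   # placeholder, detected at runtime

# 1. Adjust the environment variables of the current shell:
export HTTP_PROXY="$PROXY_URL" HTTPS_PROXY="$PROXY_URL"
export http_proxy="$PROXY_URL" https_proxy="$PROXY_URL"

# 2. Persist the setting for the Docker daemon and restart it
#    (upstart syntax, as used by Ubuntu 14.04):
echo "export http_proxy='$PROXY_URL'"  | sudo tee -a /etc/default/docker
echo "export https_proxy='$PROXY_URL'" | sudo tee -a /etc/default/docker
sudo restart docker
```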


Manual Settings (CoreOS)

First let us detect, whether sudo is needed:

sudo echo hello > /dev/null 2>&1 && SUDO=sudo

For CoreOS, we can follow the official docker instructions:

$SUDO mkdir /etc/systemd/system/docker.service.d
$SUDO vi /etc/systemd/system/docker.service.d/http-proxy.conf

and add something like the following systemd drop-in content, as shown in the official Docker instructions:

[Service]
Environment="HTTP_PROXY=http://proxy.example.com:8080"

(replace the URL so that it fits your environment)


Then activate the change by rebooting:

$SUDO reboot

or, if you do not want to reboot, reload systemd and restart the Docker service:

$SUDO systemctl daemon-reload
$SUDO systemctl restart docker

Automated Settings

Not supported yet.

Docker Host DNS Configuration for Vagrant

On a Vagrant-based Ubuntu system, I had a problem resolving FQDNs to IP addresses. The solution was to add the following lines to the Vagrantfile; the fix is described here:

# The fix is described on http://askubuntu.com/questions/238040/how-do-i-fix-name-service-for-vagrant-client:
config.vm.provider "virtualbox" do |v|
  v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
end