I was trying to copy some big files over the network to another server, and I had mapped the destination drives on the local server for easy copying. However, I kept getting errors in my script, which I suspected were caused by the mapped drives getting disconnected. Here is how to stop them from disconnecting.
Run the following as administrator in a command prompt, where -1 means disable autodisconnect entirely.
net config server /autodisconnect:-1
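If you want to verify the setting afterwards, the server service configuration can be queried from the same prompt; the same value is also visible in the registry. A small sketch (run in an elevated command prompt):

```shell
:: Show the current server service configuration;
:: the "Idle session time (min)" line reflects the autodisconnect setting
net config server

:: The same setting can be checked in the registry
reg query HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters /v autodisconnect
```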
We subscribed to a dedicated line between two datacenters, and when we tried to copy some files over, it was really slow.
We were supposed to get a few MB/s of transfer rate, but were only getting about 20 KB/s, which was unacceptable. We needed to make a clear case to the service provider to get their support in fixing this. A simple Google search turned up a tool called iperf, and it gave me exactly what I wanted. It is a shame that I never knew this tool existed.
On the destination server, I ran the iperf server with the command below:
.\iperf3.exe -s -p 136 (only a few ports are open between the two sites, and this one was not otherwise in use)
and on the source, I ran the iperf client:
.\iperf3.exe -c <Destination IP> -p 136
The results were enough to convince the service provider to fix their network.
This is probably the most basic test that can be done using the tool, but there are plenty of other options, as covered in the iperf documentation.
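A few of those other options are handy when building a case for a provider; for example (keeping port 136 from the test above):

```shell
# Run 4 parallel streams to rule out per-connection throttling
.\iperf3.exe -c <Destination IP> -p 136 -P 4

# Reverse the direction: the server sends and the client receives
.\iperf3.exe -c <Destination IP> -p 136 -R

# Run for 30 seconds instead of 10 and report in Mbits/sec
.\iperf3.exe -c <Destination IP> -p 136 -t 30 -f m
```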
My friend has a server hosted at SoYouStart, and suddenly the server went down and nobody knew why. He asked for my help to bring it back online.
We tried booting into recovery mode from the console, and we were able to log in using the credentials they sent. We were also able to download all the files as a backup. However, once we booted normally, the server was still not reachable.
Suspecting it was a bootloader issue, I tried reinstalling GRUB, and it worked. This is how I fixed it after logging in to rescue mode.
$ fdisk -l (to find the names of the physical drives, something like /dev/sdXY, where X identifies the drive and Y the partition. Ours was a RAID setup, so /boot was on /dev/md1 and / was on /dev/md2)
$ mount /dev/md2 /mnt (Mount the root partition)
$ mount --bind /dev /mnt/dev
$ mount --bind /proc /mnt/proc
$ mount --bind /sys /mnt/sys
$ chroot /mnt (This will change the root of executables to the drive that won't boot)
$ mount /dev/md1 /boot (Mount the boot partition. If /boot is not a separate partition, no need to do this step)
$ grub2-mkconfig -o /boot/grub2/grub.cfg
$ grub2-install /dev/sda (/dev/sda and /dev/sdb were the physical disks, not the partitions, used in the RAID setup. If it is not a RAID, then you should use the disk where /boot is installed)
$ grub2-install /dev/sdb
Ctrl+D (to exit out of chroot)
$ umount /mnt/dev
$ umount /mnt/proc
$ umount /mnt/sys
$ umount /mnt/boot (from outside the chroot, the boot partition mounted earlier appears under /mnt)
$ umount /mnt
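For reference, the whole sequence above can be collected into a single rescue-mode script. This is only a sketch assuming the same layout as ours (/dev/md2 as root, /dev/md1 as a separate /boot, /dev/sda and /dev/sdb as the RAID members); adjust the device names to your own fdisk -l output:

```shell
#!/bin/sh
# Reinstall GRUB from rescue mode; device names assume our RAID layout
set -e
mount /dev/md2 /mnt              # mount the root partition
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt /bin/sh -e <<'EOF'
mount /dev/md1 /boot             # separate boot partition; skip if /boot is on /
grub2-mkconfig -o /boot/grub2/grub.cfg
grub2-install /dev/sda           # install GRUB to both physical RAID disks
grub2-install /dev/sdb
umount /boot
EOF
umount /mnt/dev /mnt/proc /mnt/sys /mnt
```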
We finally managed to bring up the server, which had been down for two weeks. Felt so proud of it.
Ever since the policy to disable TLS 1.0 was pushed down to the local machines, we started getting the error "An authentication error has occurred (code 0x80004005)" when accessing a few of our Windows Server 2008 R2 servers. It was interesting because we have a bunch of other servers that we access with no problems. This seems to be a very generic error code, as Google results showed multiple problems and multiple solutions for it.
Apparently, in my case, the patch that adds RDS support for TLS 1.1 and TLS 1.2 was not installed on three of the servers with this problem. So I had to download the patch from this Microsoft website, then install it and reboot them remotely. Once installed and rebooted, voila!
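If you suspect the same cause on other servers, you can check remotely whether the update is present before patching. A sketch in PowerShell — KB3080079 (the RDS TLS 1.1/1.2 update for Windows Server 2008 R2, as I understand it) and the server names are assumptions, so substitute your own:

```shell
# Check whether the TLS 1.1/1.2 RDS update is installed on remote servers
# KB number and server names are placeholders; replace with your own
Get-HotFix -Id KB3080079 -ComputerName server01, server02 -ErrorAction SilentlyContinue
```

Servers missing from the output are the ones that still need the patch.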
I was facing this error on one of my servers while trying to open gpedit, along with the additional message "The volume for a file has been externally altered so that the opened file is no longer valid".
Here is how I fixed it.
1) Enable viewing hidden files in Explorer.
2) Navigate to C:\Windows\System32\GroupPolicy\Machine
3) Rename the file Registry.pol to something else.
4) Run gpupdate
After that, I was able to open gpedit normally.
Note that by doing this, whatever policies were set in the local group policy will be gone; only the settings from the domain policy will remain. So if you have made any local policy changes, they will need to be re-done.
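The same fix can be done entirely from an elevated command prompt, if you prefer (a sketch of the steps above; the .bak name is just my choice):

```shell
:: Rename the corrupted local policy file and let Windows rebuild it
cd /d C:\Windows\System32\GroupPolicy\Machine
ren Registry.pol Registry.pol.bak
gpupdate /force
```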
So we migrated our fileshare to a DFS namespace and started facing a lot of problems. One of the most annoying was that, no matter how much we trusted the source, PowerShell scripts on the DFS namespace would not run without a warning.
Interestingly, the problem was that Windows considers the FQDN of the DFS namespace an internet location, and therefore does not trust it. This can be fixed by editing the local group policy.
Group Policy Editor > Computer Configuration > Administrative Templates > Windows Components > Internet Explorer > Internet Control Panel > Security Page > Site to Zone Assignment List.
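In that list, each entry maps a location to a zone number (1 = Local intranet, 2 = Trusted sites, 3 = Internet, 4 = Restricted sites); adding the namespace FQDN with a value of 1 made Windows treat it as intranet. A sketch, with a hypothetical namespace name:

```shell
:: In Site to Zone Assignment List, add an entry such as:
::   Value name: \\dfs.contoso.local   Value: 1   (1 = Local intranet; name is hypothetical)
:: Once applied, the mapping can be verified in the registry:
reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMapKey"
```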