Perforce Proxy – uses and notes

Here’s the lowdown on the Perforce Proxy (aka P4P):

– You can use *any* version of it. Just use whatever the most recent one is.

– It’s as simple as you might hope. Install it, make it run all the time, point it at your main perforce instance, and then have all your p4 clients just point at the proxy instead of the main perforce server.

– Yeah, having all the clients point at a p4 proxy instance instead of the main perforce server means you can lock down your perforce server better.

– The proxies cache all the big files and pass along all the commands from the client to the main perforce server. All the proxy does is make it so the main server doesn’t have to read and transfer the big files.

– If you run a p4 proxy on your own computer, your computer caches the versioned files locally, so when you switch between branches or streams, you don’t have to wait for the big files to transfer from the main perforce server again. Yes, it’s great for VPN connections.

– You can’t point a p4 proxy at a p4 proxy 🙂 It says something about a version not being new enough, at least in my brief tests.

– P4 Proxy is easiest to install on Windows. Pretty no-brainer stuff. Really, the installer is an .msi that includes the p4 server and client. You can install just the proxy though, so no worries.

– P4 Proxy just uses disk space. CPU and memory usage are negligible.

– If you have people in all kinds of locations, just have them set up a p4 proxy instance in a relatively secure manner on their own computer (or if they’re fancy, in some VM in their own environment).

– If you go fancier than a P4 proxy, you’re looking at setting up Perforce edge servers and read-only replicas. Just don’t go there…. unless you’re HUGE and you have people (not “person”) on your dedicated IT infrastructure team.
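For the record, starting a proxy is basically a one-liner. A sketch (the port numbers, cache path, and server address here are just examples, not from a real install):

```shell
# Listen on 1999, cache versioned files under /var/cache/p4p,
# and forward all commands to the central server
p4p -p 1999 -t central-perforce:1666 -r /var/cache/p4p -d
```

Clients then set P4PORT to proxyhost:1999 instead of the main server’s address.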

Download it here:

How things get restored in Google Drive after a big delete

Imagine this:

– You have a Google Drive folder that’s receiving daily builds. This means the previous day’s build gets deleted.

– Then one day you delete the folder accidentally. You know Google Drive keeps backups of everything, so you just go into your Google Drive trash, find the folder and then right-click > Restore.

Well, you get your stuff back, that’s for sure. All of it. All of the files that had ever been in that folder. Heh. And it takes a good several hours for them all to eventually show up, and they appear gradually over that time.

Now, how many people are syncing the folder? Hah. Full hard disks a-plenty. You kind of have to just sit there refreshing the web view of the folder, deleting all the old files.

Really, I’d recommend creating a new folder and selectively restoring the specific files you need. That should be a much faster solution. Now if you have apps or something pointed at the URL of the actual folder, then, well, you’ve got a long day ahead of you. Just keep deleting the files as they show up. It’ll work.

Robocopy can’t find the path when run via TeamCity Agent

You have a network share mapped as a drive and you want your TeamCity Agent to copy files to it when it’s done building. But it doesn’t work. The user accounts are fine (run “whoami” in a TeamCity build step and check the output to make sure).

In the TeamCity build logs you’ll see something like this:
ERROR 3 (0x00000003) Getting File System Type of Destination

The annoying part is that it works just fine when you run it via the command prompt.

In short, the issue is that the TeamCity Agent service doesn’t run in the interactive user session, and mapped network drives only exist per-session. So even if you’re logged in as the same user the TeamCityAgent service runs under, and you have the network drive mapped just fine, Robocopy won’t be able to see it. You could put a “net use” command just before your robocopy command, but then you’d have to remove it again afterwards, which would be prone to failure… though maybe remove-then-add just before? Anyways, just point your robocopy command directly at the UNC path of the network share. That way, you only have to make sure your TeamCityAgent user has permissions on the network share.
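For example, sketched with made-up paths (the J: drive letter and share names are assumptions):

```shell
:: Fails under the TeamCity Agent service: J: only exists in the interactive session
robocopy "C:\BuildOutput" "J:\Builds\MyProject" /MIR

:: Works from the service: point robocopy straight at the UNC path
robocopy "C:\BuildOutput" "\\fileserver\Builds\MyProject" /MIR
```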

l2tp support in Ubuntu 16

Here’s the best step-by-step:

Here’s another step-by-step that had some mismatched text strings that kind of wrecked stuff:

Meraki doesn’t have much in the way of documentation on setting up the client VPN on Linux servers. They have something for a Linux distro running a GUI:

Here is something about getting the l2tp vpn client to work in a clean way on a Linux GUI. Again, not applicable for pure Linux servers though:

Here’s my step-by-step that works on a fresh Ubuntu 16 install and pointed at a Cisco Meraki MX64. It includes a means of keeping the connection alive by using the monit utility:

Install the packages we will need
You’ll have to sudo up to root to install all this stuff

apt-get update
apt-get install -y strongswan xl2tpd

Configure strongswan
Note: this “cat >…..” method replaces the file with the contents that follow. You can kind of script the whole config part that way

cat > /etc/ipsec.conf <<EOF

Configure the preshared key
Note: you could also just edit the file by hand, so the shared key isn’t sitting in a Google doc’s history
In that case, just add a line in /etc/ipsec.secrets that says:
nano /etc/ipsec.secrets
: PSK "pskgoeshere"
You may have to redo the double-quotes… google docs tries to be helpful
And, yes, that colon at the beginning of the line is necessary

cat > /etc/ipsec.secrets <<EOF
: PSK "pskgoeshere"
EOF

Configure xl2tpd

cat > /etc/xl2tpd/xl2tpd.conf <<EOF
[lac meraki-vpn]
lns =
ppp debug = yes
pppoptfile = /etc/ppp/options.l2tpd.client
length bit = yes
EOF

cat > /etc/ppp/options.l2tpd.client <<EOF

Start the connection

ipsec up meraki-vpn
echo "c meraki-vpn" > /var/run/xl2tpd/l2tp-control
Redo the double-quotes. Google docs screws these up bad

When this works you should see a new interface when you look at the local routing table
route
route -n
The first one resolves DNS names; the second doesn’t
You should see something like this:

Notice the middle one; the right-most column has “ppp0”…. That’s what you want.
If you don’t see it, wait 5-10 seconds and check again, then rerun that above “echo…..” command again.

When this works, you should see another connection in ifconfig
You should see something like this:

Notice the “ppp0” connection. It’s not there until after you run that above “echo…..” command. This is the actual VPN connection
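That wait-and-retry dance can be scripted; a rough sketch, assuming the connection commands above have already been run once:

```shell
# Give ppp0 up to ~30 seconds to appear, re-poking xl2tpd each time it isn't up yet
for i in 1 2 3 4 5 6; do
    ifconfig ppp0 >/dev/null 2>&1 && break
    echo "c meraki-vpn" > /var/run/xl2tpd/l2tp-control
    sleep 5
done
```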

Set up a route so we actually use the VPN interface
A few steps here, and I’ll show things with what I actually saw
The goal is to set a route that sends all traffic destined for 10.x.x.x addresses through the VPN connection.

Get the IP address of the local VPN interface
You need to get the IP address listed right after “P-t-P” in the “ppp0” interface

Add the route
route add -net gw
This makes the OS send all network traffic destined for any IP that starts with “10.x.x.x” through the interface, which means it goes through the VPN tunnel and to the Meraki VPN server, which then routes it as needed.
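Concretely, that route command looks something like this; the 10.0.0.0/8 network matches the “10.x.x.x” note above, and the gateway address is a made-up example of the P-t-P address:

```shell
# Send everything destined for 10.x.x.x through the VPN's ppp0 peer address
route add -net 10.0.0.0 netmask 255.0.0.0 gw 10.192.0.1
```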

Disconnect the VPN
echo "d meraki-vpn" > /var/run/xl2tpd/l2tp-control
ipsec down meraki-vpn

Connect the VPN
ipsec up meraki-vpn
echo "c meraki-vpn" > /var/run/xl2tpd/l2tp-control

Making sure the VPN connection stays active
Latest: monit seems to be a viable means of making sure the vpn connection is stable.

Install monit
apt install monit

Create a config file aimed at monitoring our VPN connection
nano /etc/monit/conf.d/monitor_vpn
In there use the following:
check network ppp0 with interface ppp0
start program = "/bin/bash -c '/p4proxy_bh/'"
stop program = "/bin/bash -c '/p4proxy_bh/'"
if failed link then restart

Create the referenced scripts
nano /p4proxy_bh/
ipsec up meraki-vpn
sleep 5
echo "c meraki-vpn" > /var/run/xl2tpd/l2tp-control
sleep 5
route add -net gw

nano /p4proxy_bh/
echo "d meraki-vpn" > /var/run/xl2tpd/l2tp-control
sleep 5
ipsec down meraki-vpn
sleep 5
route delete -net gw

What this does:
The first “ppp0” in that monit config file is actually the name of the monitor…you could name it anything (used with “monit start ppp0” to manually run the monitor)
It’s saying if there is no interface by the name of “ppp0”, as would be the case when the VPN connection is down, then “restart”… in monit terms, “restart” = run the “stop program” and then the “start program” specs. It then also sets the route, just in case.

Monit wakes up and runs all the configured monitors every 2 minutes

monit status
Shows the last result of each monitor
This worked to successfully and easily detect that the VPN tunnel interface was down and automatically restart it.

Biggest Fail
All the “meraki-vpn” strings refer to each other. Some guides had inconsistent string names. The “conn” name in ipsec.conf and the “[lac]” name in xl2tpd.conf have to use the same string, in this case “meraki-vpn”.
Google Docs changes double quotes to fancy italicized ones, and when copied and pasted into a Linux terminal, they are technically *not* double-quotes, so your commands fail in all kinds of fun and interesting ways. Disable it:
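If a file already got polluted with curly quotes, you can also fix it after the fact. A sed sketch (the file path is just an example):

```shell
# Example file with Google-Docs-style curly quotes in it
printf '%s\n' ': PSK “pskgoeshere”' > /tmp/ipsec.secrets.fixme

# Swap the curly double-quotes back to plain ASCII quotes, in place
sed -i 's/“/"/g; s/”/"/g' /tmp/ipsec.secrets.fixme
```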
There is a package you can now install that makes l2tp stuff much easier on Ubuntu desktop-with-a-GUI, but that won’t really work on command-line-only servers.

Here’s a message seen in “journalctl -xe” when nothing happens when you try to start the l2tp part (the “echo c….” command):
Aug 23 09:42:09 vpntest charon[1296]: 02[NET] sending packet: from[4500] to xxxxxxxxxx[4500] (60 bytes)
Aug 23 09:42:14 vpntest xl2tpd[1702]: Maximum retries exceeded for tunnel 3074. Closing.
Aug 23 09:42:14 vpntest xl2tpd[1702]: Connection 0 closed to xxxxxxxxx, port 1701 (Timeout)
Aug 23 09:42:19 vpntest xl2tpd[1702]: Unable to deliver closing message for tunnel 3074. Destroying anyway.
Aug 23 09:42:38 vpntest charon[1296]: 15[IKE] sending keep alive to xxxxxxxxxxx[4500]

A reboot of the VPN server on the MX64 resolved this.

Helpful debugging commands to use on the client
journalctl -xe
“Charon” messages are from ipsec (strongswan)
“Xl2tpd” messages are from xl2tp

/usr/sbin/xl2tpd -D
Can show xl2tpd-specific things

Background a long-running Linux task

Background a shell command that’s taking forever to complete when you don’t want to open up a new ssh session to the host:


Pretty useful, especially if you’re manually running something like a backup process that takes a long time and that, if you were to just close the ssh session, would just stop in some unknown state of incompleteness.
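The usual trick here is the shell’s job control. A sketch of the sequence:

```shell
# 1. Suspend the running command with Ctrl+Z (a keystroke, not a command)
# 2. Resume it in the background:
bg
# 3. Detach it from the shell so closing the ssh session won't kill it:
disown -h %1
```

If you know ahead of time it’ll run long, starting it under nohup (or in screen/tmux) avoids the dance entirely.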

VMware: the operation is not allowed in the current state

I had tried to put a host into maintenance mode so I could reboot it, but it seemed to get hung up. I didn’t see any outstanding tasks, though, so I just rebooted it anyway. After it came back up, things seemed alright, and I started up the various VMs on it. Later, I started a migration task to change the storage of one of the VMs on that host. At the very end it said “the operation is not allowed in the current state” and failed. It happened again when I tried to deploy an OVF template to that host.

This post from VMware gave a number of things to try. For me, it was disconnecting the host from vCenter and then reconnecting it (all right-click operations, so it was easy and quick). When it reconnected, the host was shown to be in maintenance mode, which was not the case before disconnecting and reconnecting it. Things seem happy now.

Running a different Perforce version with p4dctl than the latest (16.1 or whatever)

If you want to run Perforce with p4dctl since it’s so handy, but you only have a license for some older version of Perforce, here’s how you can have p4dctl point at that version:

Put the p4d binary into /opt/perforce/sbin/ and name it something descriptive for its version, like “p4d.2012.1”

Change the symbolic link in /etc/alternatives/helix-p4d to point at that instead of the p4d version that was installed:

rm /etc/alternatives/helix-p4d
ln -s /opt/perforce/sbin/p4d.2012.1 /etc/alternatives/helix-p4d

Done. You’ve just swapped out the version of p4d that p4dctl uses. But this is global for the entire machine, so it’s something to consider. I think it’ll auto-upgrade an existing p4 instance, but it can’t go backwards to older versions, just to mention that; you’ll get weird errors if you try.

Easiest Way to Run Perforce

Install Perforce according to this:

Then run the configure script mentioned in the output after the installer finishes.

That script essentially just sets up a .conf file in /etc/perforce/p4dctl.conf.d/

You can edit that thing directly or add another one if you want to have another perforce instance running on the same machine.
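For reference, a stanza in one of those .conf files looks roughly like this; the instance name, paths, and port below are illustrative, not copied from a real install:

```
p4d master
{
    Owner   = perforce
    Execute = /opt/perforce/sbin/p4d
    Umask   = 077

    Environment
    {
        P4ROOT = /p4root
        P4PORT = 1666
        P4USER = perforce
    }
}
```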

Note: you cannot use p4dctl with replication, as best as I can tell – it clobbers any “p4 configure” settings that may have been set on the master for the replica.

Start and stop all perforce instances mentioned in /etc/perforce/p4dctl.conf.d/:
p4dctl start -a

p4dctl stop -a

Description of Perforce Replication

Here’s a super-simple description of how Perforce replication works:

You replicate from a master p4 instance to one or more replica instances. Each replica instance is just a p4d process running a few “p4 pull” commands pointed at the master via some configs. Those configs are set on the master by saying “replica instances named ‘Replica1’ should run with these configs”. You can have several replica instances on several different hosts all using the same perforce user. They all just need to have replayed a checkpoint file and synced over all the versioned files from the master, and they all just need to be started with a command-line arg that says the instance is named “Replica1” (or whatever name you set up with configs on the master).

Some notes:

Perforce users used for replication need to have ‘super’ permissions (set in p4 protect on the master). They also need to have their ticket timeouts set to ‘unlimited’ or to something super-long, so you don’t have to log in with the user to renew the ticket.

You can also set up what Perforce calls a “forwarding replica”: a read-only replica that auto-forwards any write operations to the master and waits to respond to the client until after those changes have replicated through to it.

I *believe* you could replay a checkpoint and copy an incomplete set of the versioned files to the replica and things would still work; you would then run ‘p4 verify -t’ on the replica to kick off a giant comparison of every file and have the replica pull down replacement files from the master when it detected an inconsistency or in the case of missing files. So, if I understand it correctly, you could technically just set up a replica with only a checkpoint from the master, and then if you ran ‘p4 verify -t’ the replica would eventually sync up.
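That check would be kicked off with something like this, run against the replica (the port and depot path are examples):

```shell
# Verify every revision on the replica; -t schedules transfers from the
# master for anything missing or damaged, -q keeps the output quiet
p4 -p localhost:22222 -u super verify -qt //...
```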

Setting up Perforce Replication

This was oddly tricky. There’s not a ton of documentation on the internet about it, at least not that my Google searching turned up. Loads of official Perforce docs, yes, but I kept having problems with those step-by-steps.

Not to fear though. Below is a step-by-step that worked for me. I only tested it on the latest version, but it seems older versions (2012 or newer) are the same.

Set up Perforce replication

Master here is at
Replica host is at
Replication service user is named repl_user_for_Replica3
Assumes your perforce installs are in the same directory (/p4root in this case)
Assumes you installed perforce to run under the “perforce” OS user (default from apt-get)
Assumes both your perforce instances use the same case sensitivity and unicode settings

## Set up the perforce repo and update apt

## Do a basic install on the master
apt-get install helix-p4d -y

## On the master, do a basic configuration, setting the p4 root to /p4root
## Don’t use ssl:1666; just type in 1666 for that step

## On the master, set up a user for the replica to use
p4 -p -u super user -f repl_user_for_Replica3
## Add a line at the bottom with “Type: service”

## On the master, set a password for the service account
p4 -p -u super passwd repl_user_for_Replica3

## On the master, set up a group for the service accounts and make login tickets effectively never expire
p4 -p -u super group service_group
## Users: repl_user_for_Replica3
## Timeout: unlimited
## You can add more users by just having one per line
## You should see “Group service_group created.”

## Add super permissions to the service_group group – sounds alarming, but it seems to be required
## super group service_group * //...
p4 -p -u super protect

## Alright, the idea here is to set all the important replication things on the master rather than on the replica
## This way, for the replica we only have to worry about restoring the checkpoint, copying the versioned files, and naming the replica
## This approach will result in the replica automatically pulling everything it needs after merely starting it up
## I believe you could then start up any other p4 instance and merely point it at the master
## Based on its name it will then have all the replication configs already set up for it to use

## On the master, set the master for the replica, the p4 instance of which we will call “Replica3”
## also set a lot of other variables the replica will pick up and use
p4 -p -u super configure set Replica3#P4TARGET=
p4 -p -u super configure set Replica3#P4LOG=replica3Log.txt
p4 -p -u super configure set Replica3#server=3
p4 -p -u super configure set Replica3#monitor=1
p4 -p -u super configure set "Replica3#startup.1=pull -i 1"
p4 -p -u super configure set "Replica3#startup.2=pull -u -i 1"
p4 -p -u super configure set "Replica3#startup.3=pull -u -i 1"
p4 -p -u super configure set Replica3#db.replication=readonly
p4 -p -u super configure set Replica3#lbr.replication=readonly
p4 -p -u super configure set Replica3#serviceUser=repl_user_for_Replica3

## On the master, checkpoint the master
p4 -p -u super admin checkpoint

## Now, find the checkpoint.3 file (or whatever the biggest number is) and copy that to the replica’s p4root folder
## Yes, it’s likely a 2-step process, where you scp to /tmp, then ssh into the replica host and move from /tmp to /p4root
## Yes, you will have to mkdir /p4root on the replica
scp /p4root/checkpoint.3 ts@

## Copy all the versioned files from the master to the replica’s p4root folder (same location as on the master, relative to p4root)
## Yes, it’s likely a 2-step process, where you scp to /tmp, then ssh into the replica host and move from /tmp to /p4root
## Yes, you will have to mkdir /p4root on the replica
scp -r /p4root/streamsDepot ts@
## I’m using “streamsDepot” because that was the default folder it created. You may have other folders there; each represents a depot

## On the replica install p4
apt-get install helix-p4d -y

## On the replica, create /p4root
mkdir /p4root

## Move or copy the checkpoint we copied earlier
cp /tmp/checkpoint.3 /p4root

## Move or copy the versioned files we copied earlier
cp -r /tmp/streamsDepot /p4root

## On the replica, make sure the perforce user can act on the files you copied over
chown -R perforce:perforce /p4root

## Change users to the perforce OS user to start up the p4d process
su perforce

## On the replica host restore the checkpoint
p4d -r /p4root -jr /p4root/checkpoint.3
## You should see “Recovering from /p4root/checkpoint.3…”
## You should then see a bunch of files that start with “db.” in /p4root
## After you’re all done, btw, you can remove these checkpoint.3 files

## On the replica host log in to the master p4 instance with the service account so you get a forever-ticket, as we had configured on the master
p4 -u repl_user_for_Replica3 -p login

## You should see a .p4tickets file in the perforce user’s home directory when you do “ls -a” after this.
## If not, check the replica3Log.txt file for indicators of what’s wrong.

## Start the replica, naming it Replica3 and having it listen on port 22222
p4d -r /p4root -In Replica3 -p 22222 -d
## Naming it “Replica3” means it’ll pull up the configs you set on the master for “Replica3”

## You should connect to the master, submit some changes and confirm you see those in the replica
## In my case, I just copied and pasted and submitted some random text files in the streamsDepot and verified the versioned files were replicated over

## Check to make sure the replication processes are running on the replica
p4 -p -u super monitor show -a

## I *think* you can just have a cron job running that tries to start up p4d as though you were just starting the replica
## (sledgehammer method of making sure it’s always running… hah)
p4d -r /p4root -In Replica3 -p 22222 -d

## Potential improvements:
– have .p4tickets file stored in /p4root instead of the perforce user’s home directory (maybe better?)
– See if there’s any way the replication service group could use something less than “super” privileges
– Monitoring to detect if and when replication goes down or otherwise stops
– Cron job to make sure the replica p4d process is auto-restarted if it stops
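That sledgehammer cron job might look like this (a sketch, reusing the paths and port from above):

```
# Every 5 minutes, try to start the replica p4d as the perforce OS user.
# If one is already listening on 22222, the new process can't bind and exits.
*/5 * * * * su perforce -c 'p4d -r /p4root -In Replica3 -p 22222 -d'
```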

## Links

The replica should then start auto-syncing everything
In replica3Log.txt, it should be very clean:

Perforce server info:
Perforce Server starting 2017/04/11 01:54:54 pid 12986 P4D/LINUX26X86_64/2016.2/1498579.
Perforce server info:
2017/04/11 01:54:54 pid 12987 repl_user_for_Replica2@unknown background [p4d/2016.2/LINUX26X86_64/1498579] 'pull -i 1'
Perforce server info:
2017/04/11 01:54:54 pid 12988 repl_user_for_Replica2@unknown background [p4d/2016.2/LINUX26X86_64/1498579] 'pull -u -i 1'
Perforce server info:
2017/04/11 01:54:54 pid 12989 repl_user_for_Replica2@unknown background [p4d/2016.2/LINUX26X86_64/1498579] 'pull -u -i 1'

The following message in replica3Log.txt means your replication service user doesn’t have super permissions
check out the “protect” step earlier; the replication user group needs to have super permissions on everything

Perforce server error:
2017/04/08 18:01:00 pid 1699 service@ background ‘pull -i 1’
Startup command failed: client-Message
2017/04/08 18:01:00 pid 1699 service@ background ‘pull -i 1’
Replica access refused. Ensure that the serverid and service user are correctly configured on the replica. Server Replica1 may not be used by service user service.
Server ‘Replica1’ doesn’t exist.

If you started up p4d on the replica with the wrong IP the replica3Log.txt file will say as much