Author Archives: Lee Winder

ACCU 2014 Conference Notes

I had the chance to go to ACCU 2014 the other week (full conference schedule is here) and I have to say it was one of the best conferences I’ve had the pleasure to attend. And while it did confirm my view that C++ is getting to the point of saturation and ridiculous excess (C++11 was needed, and as a result so was C++14, but C++17… just use a more suitable language if these are the features you need), the talks I went to were, on the whole, great.

So I thought I may as well post up the notes I made from each talk – and while they might be a bit of a brain dump from the time, if there’s something that sparks your interest, I’m sure the full presentations will be posted up at some point soon.

 

Git Archaeology
Charles Bailey, Bloomberg LP

Interesting talk looking at some of the more esoteric ways of presenting and searching for information within an existing Git repository. Unfortunately, time was short so the speaker had to rush through the last part, which was, for me, the most relevant and interesting part.

Talk notes

 

Designing C++ Headers Best Practices
Alan Griffiths

For most experienced C++ developers the content of this talk is probably quite basic, as you’d have a large portion of this covered already through sheer trial and error over the years. But clarifying some good practices and interesting side effects was enjoyable.

Talk notes

 

Version Control – Patterns and Practices
Chris Oldwood, chrisoldwood.blogspot.com

High-level overview of the various patterns we use when using version control (focused mostly on DVCS), some useful examples and some interesting discussion about trust issues…

Talk notes

 

Performance Choices
Dietmar Kuhl, Bloomberg LP

Proof that if you want to know about optimisation and performance, stick to asking people in the games industry.

I’m not posting these notes.

 

Crafting More Effective Technical Presentations
Dirk Haun, www.themobilepresenter.com

Really good presentation on how to craft good presentations – some interesting discussion of the make-up of the human brain, why certain techniques work, and why the vast majority of technical talks (or just talks in general, to be honest) do what they do.

Talk notes

 

The Evolution of Good Code
Arjan van Leeuwen, Opera Software

Great talk – not telling us what good code is, but examining a few in-vogue books from the last decade to see where they sit on various contentious topics. Note that when the notes say “no-one argued for/against” it’s just referencing the books being discussed!

Talk notes

 

Software Quality Dashboard for Agile Teams
Alexander Bogush

Brilliant talk about the various metrics Alexander uses to measure the quality of their code base. If you’re sick of agile, don’t let the title put you off, this is valuable reading regardless of your development methodology.

Talk notes

 

Automated Test Hell (or There and Back Again)
Wojciech Seliga, JIRA product manager

Another great talk, this time discussing how the JIRA team took a (very) legacy project with a (very) large and fragile test structure into something much more suitable to quick iteration and testing. When someone says some of their tests took 8 days to complete, you have to wonder how they didn’t just throw it all away!

Talk notes

 

Why Agile Doesn’t Scale – and what you can do about it
Dan North, http://dannorth.net

Interesting talk, arguing that Agile is simply not designed to, nor was it ever imagined to, be a scalable development methodology (where scale is defined as bigger teams and wider problems). It excellently covered why and how agile adoption fails, and how this can be avoided to the point where agile principles can be used on much larger and less flexible teams.

Talk notes

 

Biggest Mistakes in C++11
Nicolai M Josuttis, IT-communications.com

Entertaining talk where Nicolai, a member of the C++ Standard Committee library working group, covers various features of C++11 that simply didn’t work or shouldn’t have been included in the standard. When it gets to the point that Standard Committee members in the audience are arguing about how something should or does work, you know they’ve taken it too far.

Talk notes

 

Everything You Ever Wanted to Know About Move Semantics (and then some…)
Howard Hinnant, Ripple Labs

Detailed talk on move semantics, which is never as complicated as it’s made out to be (even by the speaker). There are some dumb issues, which seem to be increasing as the C++ standards committee doesn’t seem to grasp the scale of the changes being made, but nevertheless it was a good overview of what is a key improvement to C++.

Talk notes

Git Off My Lawn – Large and Unmergable Assets

I posted up the Git talk Andrew Fray and I gave at Develop 2013 and mentioned I’d have a few follow-up posts going into more detail where I thought it was probably needed (since you often can’t get much from a slide deck, and no-one recorded the talk).

One of the most asked questions was how we handled large and (usually) unmergeable files (mostly in regards to art assets, but it could be other things like Excel spreadsheets for localisation). This was hinted at on slides 35–37, though such a large topic needs more than 3 slides to do it justice!

To start talking about this, it’s worth raising one of Git’s (or indeed any DVCS’s) major drawbacks, and that’s how it stores assets that cannot be merged. Instead of storing history as a collection of deltas (as it does with mergeable files), Git simply stores every version as a whole file, so if you have 100 versions of a 100MB file your repository could be 10GB in size for that file alone (it’s not quite that clear-cut, but it explains the issue clearly enough).
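To see this growth in practice, here’s a throwaway sketch (temporary repo, random data standing in for an art asset – all names here are mine, not from the original post):

```shell
#!/bin/sh
# Throwaway demo: commit three versions of an incompressible binary file
# and watch the object store grow by roughly the full file size each time.
set -e
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"
for i in 1 2 3; do
  # 1MB of random data stands in for a new version of a binary asset.
  dd if=/dev/urandom of=asset.bin bs=1024 count=1024 2>/dev/null
  git add asset.bin
  git -c user.email=demo@example.com -c user.name=demo commit -q -m "version $i"
done
# Three versions of a 1MB file -> roughly 3MB of objects, since zlib
# can't usefully compress random data and Git stores each version whole.
du -sh .git/objects
```

Swap the random data for a real PSD and the effect is the same: every saved version adds close to the full file size to the repository.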

While this is a drawback of DVCS in general it’s not necessarily a bad thing.

It’s how all SCM systems handle files that can’t be merged (some SCMs do the same with mergeable files too – imagine how large their repositories are), but the problem comes with Git’s requirement that a clone pulls down the whole repository and its history rather than just the most recent version of all files. Suddenly you have massive repositories on everyone’s local drive, and pulling that across any connection can be a killer.

As an example, the following image shows how a single server/client relationship might work, where each client pulls down the most recent file, while the history of that file is stored on the server alone.

But in a DVCS, the whole repository is cloned on all clients, resulting in a lot of data being transferred and stored on every clone and pull.

Looking at Sonic Dash, we have some pretty large files (though nowhere near as large as they could be), most of them PSDs, though we have smaller files like the Excel spreadsheets we use for localisation. Since none of these files are mergeable and most of them are large, we couldn’t store them in Git without drastically altering our workflow. So we needed a process that allowed them to be part of the general development flow without bringing with them all the problems mentioned above.

Looking at the tools available, and at what was already in use, it made sense to use Perforce as an intermediary. This would allow us to version our larger files without destroying the size of our Git repository, but it did bring up some interesting workflow questions:

  • How do we use the files in Perforce without over-complicating our one-button build process?
  • With Git, we could have dozens of active branches, how do they map to single versioned assets in Perforce?
  • How do we deal with conflicts if multiple Git repositories need different versions of the Perforce files?

 

Solving the first point starts to solve the following ones. We made it a rule that only the Git repository is required to build the game, i.e. if you want to get latest, develop and build, you only need the Git repository; P4 is never a requirement. As a result, the majority of the team never even opened P4V during development.

This means the P4 repository is designed to only hold source assets, and we have a structured or automated process that converts assets from the P4 repository into the relevant asset in the Git repository.

As an example, we store our localisation files as binary Excel files as that’s what’s expected by our localisation team, but since that’s not a mergeable format we store them in P4. We could write (or probably buy) an Excel parser for Unity, but that wouldn’t help since we’d constantly run into merge conflicts when combining branches. So we have a one-click script (written in Ruby if you’re interested) that converts the Excel sheet into per-platform, per-language XML files that are stored in the Git repository.

These files are much more Git friendly since the exported XML is deterministic and easily merged. If a conflict is too complex, whoever is resolving it can fix things as they see fit and simply re-export the Excel file to get the latest version.

It also means that should a branch need a modified version of the converted asset they can either do it within the Git repository or roll back to a version they want in P4 and export the file again. The version in P4 is always classed as the master version, so any conflicts when combining branches can be resolved by exporting the file from P4 again to make sure you’re up to date.

Along with this we do have some additional branch requirements that help assets that might not be in Perforce (such as generating Texture Atlases from source textures) but that’s another topic I won’t go into yet.

GLFW – Compiling Source and Creating an Xcode Project

GLFW is a great library – easy to use, with great documentation covering its API design. But if you’re not on Windows, where they provide pre-built libraries, going from a source download to a compiling Xcode project is _not_ easy.

Since brew (as of writing) only provides GLFW 2, you need to build GLFW 3 yourself.

There is barely any information on it at all; it assumes you know exactly what you’ve downloaded (the source and some CMake files, by the way) and that you’ll be able to figure it all out (there is some additional information if you root around the GitHub forums, but it’s not exactly easy to find).

So I’m posting up how I got GLFW compiled and running in Xcode, in case anyone else starts banging their head against a brick wall.

First off, download the source, followed by CMake (I downloaded the 64/32-bit .dmg for 2.8.11.2).

Now to get it to compile you need to set up a few options. You can do this in the CMake GUI, but I found it much easier to do it manually. Browse to GLFW/Src and open config.h.in.

This is a settings file for the build you’re about to perform. There are a lot of settings in here you can play around with (they are explained in the README.md file in the root GLFW folder), but I enabled the following:

  • _GLFW_COCOA
  • _GLFW_NSGL
  • _GLFW_NO_DLOAD_WINMM (this is a Windows-only define, but I enabled it, so I’m adding it here anyway)
  • _GLFW_USE_OPENGL

 

Save the file and then you’re ready to compile.

Open the Terminal, browse to the root of the GLFW folder and enter the following commands:

cmake .
make install

The cmake command will set up the project ready to be built. It’ll take the config file we modified earlier and create a valid config.h file, and it’ll carry out any other set-up needed as well. Calling make builds the project, and the install target installs the library in its default location.

Now you’re ready to add GLFW to your Xcode project. To keep it simple I’ll step through adding it to a new project, once you can do that adding it to an existing one is pretty easy.

  • Open Xcode and create a new OSX Command Line Tool project
  • In the project settings, add the path to the GLFW include folder to the ‘Header Search Paths’ option and the path to the libglfw3.a lib to the ‘Library Search Paths’ option (both of these paths are shown in the install make output)
  • Add libglfw3.a to the Link build phase
  • You also need to add the following Frameworks to the build phase to allow the project to compile
    • IOKit
    • Cocoa
    • OpenGL

 

You can now copy/paste the sample code provided on the GLFW website into the existing main.c file, hit compile and you have a running GLFW project.

Note that the sample code doesn’t call glClear before glfwSwapBuffers, so what you get in the open window is undefined.

Git Off My Lawn – Develop 2013

Andrew Fray (@tenpn) and I recently presented “Git Off My Lawn” at Develop 2013 in Brighton. The talk was designed to discuss how we took a Perforce-centric development process and moved over to one centred around Git and a Git-based branching workflow.

As with most presentations the slides tell half (or less) of the story, so while I’m posting up the slides here I’ll spend the next couple of weeks (or more likely months) expanding on a particular part in more detail.

In the mean time if you have any questions, just fire them at me.

Git Fetch --prune and Branch Name Case

I posted up the following to the git community mailing list the other day

When using git fetch --prune, git will remove any branches from
remotes/origin/ that have inconsistent case in folder names.

This issue has been verified in versions 1.7.10.2, 1.7.11.1 and 1.8.3.2.

I've described the reproduction steps here as I carried them out, and
listed the plaforms I used to replicate it.  The issue will most
likely occur on a different combination of platforms also.

- On Mac, create a new repository and push a master branch to a central server
- On Mac, create a branch called feature/lower_case_branch and push
this to the central server (note that 'feature' is all lower case)
- On Windows, clone the repository but stay on master, do not checkout
the feature/lower_case_branch branch
- On Windows, branch from master a branch called
Feature/upper_case_branch (note the uppercase F) and push to the
server
- On Mac, run git fetch and see that
remote/origin/Feature/upper_case_branch is updated

Couple of things to note here
1) In the git fetch output it lists the branch with an upper case 'F'
  * [new branch]      Feature/upper_case_branch ->
origin/Feature/upper_case_branch
2) When I run git branch --all it is actually listed with a lower case 'f'
  remotes/origin/feature/upper_case_branch

Now the problem happens when I run git fetch --prune, I get the following output
  * [new branch]      Feature/upper_case_branch ->
origin/Feature/upper_case_branch
  x [deleted]         (none)     -> origin/feature/upper_case_branch

Note the new branch uses 'F' and the deleted branch uses 'f'.

The results of this bug seem to be
* Every time I call git fetch it thinks Feature/upper_case_branch is a
new branch (if I call 'git fetch' multiple times I always get the
[new branch] output)
* Whenever I run with --prune, git will *always* remove the branch
with a different folder name (from a case sensitive perspective) than
the one originally created on the current machine.

I’ve yet to receive a response as to whether this is an actual bug (it certainly looks like it) or expected behaviour, but it caused quite a bit of running around trying to find a solution (I originally thought it was a SourceTree bug). Since our branches are extremely transient, we use --prune a lot, so not being able to use it would have caused quite a few issues.

Luckily it can be worked around by calling ‘git fetch --prune’ followed directly by ‘git fetch’, and depending on what tool you’re using, adding this as a custom step is usually pretty easy.
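If you’re on the command line, the two steps can be wrapped in a git alias so the workaround is a single command (the alias name ‘sync-prune’ is just my choice, not anything standard):

```shell
# Wrap the workaround in an alias: prune first, then fetch again so any
# branch that --prune wrongly removed is re-created from the remote.
# 'sync-prune' is an arbitrary name of my choosing, not a standard alias.
git config --global alias.sync-prune '!git fetch --prune && git fetch'
```

After this, running ‘git sync-prune’ performs both steps in order.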

Here’s the link to the list thread if you want to follow it.

Installing P4 on Mac OS…

…because for some reason installing P4V doesn’t install the command line tool (and I’m tired of having to Google it).

  • Download the command line client from Perforce – http://www.perforce.com/downloads
  • Copy the downloaded file to /usr/local/bin/
  • Make the file executable by running the following in Terminal

 

cd /usr/local/bin/
chmod +x p4

 

Why Perforce can’t provide a P4 install is beyond me…

Negative Developers and Team Stability

It doesn’t take much for negative feelings to start to seep into a team, but it takes a lot more to turn a team around and start to raise morale and motivation. The following isn’t based on an in-depth study of development teams across the world, but on my own personal experience of managing and observing a number of teams over the last 10 years.

Take of that what you will…

When you look at the make-up of a team, it will always be staffed by some people who raise the game and by some who only bring it down. It’s the nature of taking a group of individual people and asking them to work together for a period of time towards a common goal. It’s the individuality of these people that can take a project and make it fly, or cause it to crash and burn.

One thing that’s clear is that it’s much easier for a single individual to bring a team down than it is for an individual to improve the team in any significant way. Negativity will spread like wildfire through a team, whilst positivity acts more like treacle and can be much harder to spread around.

But why?

A negative attitude to work is a whole lot easier. Doing less, talking badly about the team or rubbishing the game is much easier than creating excellent content, taking responsibility for your work or stepping outside your defined role and doing something great.

 

What Defines a Negative Developer?

There are many ways in which a developer might have a negative effect on a team. The most obvious is through their general attitude to their current project, be that general low level complaining, pushing back against work requests for no good reason or general slacking off during the day.

It could be a lack of skill development or even a backsliding in the quality of the work they are producing.

But it could also be an attitude that doesn’t gel with the general ethos the team is aiming for. Maybe you want your developers to take more responsibility for their work and how it’s designed and implemented and one or two developers will only work when they are told exactly what they need to do.

Maybe it’s a developer who doesn’t get involved with the daily meetings, mumbling through and obviously not interested in what other people are doing.

At the end of the day, identifying a developer having a negative effect on a team is usually pretty easy. They’re the ones who are difficult to deal with in many aspects of the development process…

 

Team Development

Let’s have a look at a few situations, where a green developer is a ‘positive’ developer and red a ‘negative’ one.

In the first situation we have two developers working side by side, one working well and another not doing so great. Maybe one of them has a bad attitude, maybe they don’t want to really push what they are doing. Either way, their contribution to the team is much less than that of the positive developer.

In most cases, this will go only one way. The good developer, seeing their partner being allowed to get away with not working so hard and not having to put in as much effort, will eventually start to slow down and equalise with the poorer developer.

It’s much less likely that the poorer developer who is getting away with poor work or a bad attitude will see the better developer and decide to put in that extra work. As a result, you now have two bad developers rather than one.

When does it go the other way? When does the poor developer look around and start to raise their game? The answer isn’t very encouraging.

Take the following situation

There’s a tight balance here, but since it’s much easier for a developer to reduce the quality of their work than to improve it, it’s easy to slide the wrong way, and at that point it’s very easy to see where this will go.

Based on a number of observations, it seems as though while a 3:1 ratio might get you some good results, it still brings risks, because should one developer start to slip it then becomes 1:1, which puts us right back at the start.

In most cases you can only really guarantee that other people will not slip if you have a 4+:1 ratio between positive and negative developers. In a number of cases the negative developer didn’t change their attitude without help, but other developers didn’t slip, thanks to the peer pressure of the better developers around them.

 

Positive Developers

But in all these situations I’m not giving these positive developers enough credit. A good developer won’t always slack, they’ll continue working hard, producing great content and generally continue to fly high.

But take the following situation…

These developers are good for a reason, be that personal pride, ambition or sheer enjoyment of the work they are doing. And if a good developer finds themselves in the minority for a long period of time, the outcome is inevitable.

Great developers won’t stick around if those around them are not working to their potential or failing to create an environment in which the better developers feel themselves being pushed. And once your great developers leave you have a much higher chance of those left realising they don’t need to actually work that hard to get through the day.

Solving the Problem

There are two ways to deal with poor developers on a team. The first is the most drastic, but initially not an option if you’re working in a region with sane labour laws.

Just drop them.

To be honest, I wouldn’t recommend this anyway.  Simply letting someone go removes the problem, but it can leave a lot of holes in the team – and you hired this person for a reason, why not try and get that spark back?

Performance Management structures (you do have a performance management process don’t you?) within an organisation can, if done correctly, not only resolve the problem but allow the poor developer to raise their game and become a star on the team.

Identify the source of the problem.  Does the developer just not like the game, are they having a difficult time outside of work, do they disagree with how work is being allocated or do they just not want to be there?  Depending on what their answers are, you’ll have a good idea of where to go next.

Make sure goals are set.  Define goals designed to turn the situation around, but don’t just set them and forget about them (which happens far too often).  Monitor them on a weekly or bi-weekly basis, setting very short-term goals to complement the longer-term ones.

Define a fixed period of time.  Don’t just let things drag on with only small improvements here or there, have a deadline at which point things will get more serious.

Make it clear what the end results will be.  Whether they are the chance to work on something different or whether it’s a termination of the contract, make it clear so everyone knows what will happen when the goals are reached or missed.

Keep constant records.  Make sure every meeting is documented and the progress or results of all the goals are recorded daily.

Let them go.  While it is drastic, if improvements are not being made despite all the opportunities you’ve given them, then there really is no other option.  If you’ve bent over backwards to try and solve the problem and the developer hasn’t taken you up on the offer, there is nowhere else to go.

And even with those sane labour laws, the documentation you’ve been keeping over the Performance Management period means you can release the developer from their contract knowing you tried your best and they didn’t want the help.

 

So negative developers – whatever is defined as negative based on the goals of your team – are almost guaranteed to have a bad effect on a group of developers.  Negative attitudes to work and development can spread much faster than you might think, and will either cause people on your team to normalise at a level far below where they need to be, or to simply leave.

It’s vital that these developers are tackled fast, rather than only once their effects start to be felt.

Ruby, Jenkins and Mac OS


I’ve been using Jenkins as my CI server for a while, and though user permissions have always been a bit of an issue (I’ll explain why in another blog post covering my Unity build process), running command line tools has never been too much of a problem once it got going.

At least not until I tried to run Ruby 1.9.3 via RVM on our Mac Jenkins server.

I’d developed the Ruby scripts outside Jenkins so I knew they worked, but when I came to run them through Jenkins (using nothing more than the ‘Execute Shell’ build step) it started behaving strangely. Running the script caused the step to fail instantly claiming it couldn’t find any of my installed Gems.

A quick ‘gem query --local’ on the command line showed that all the gems I needed were there.

As an experiment I added a build step that installed the Trollop gem (a fantastic gem, you should check it out!) to see if that made any difference, in effect forcing the install in the service that would run the Ruby script. I was surprised when the install worked, but it installed it for Ruby 1.8 rather than Ruby 1.9.

Adding ‘ruby --version’ as a build step showed that for some reason the Jenkins server was using Ruby 1.8.7 rather than 1.9.3.

It turns out that RVM is not available when run through a non-interactive shell, which is a bit inconvenient when you need it as part of an automated process.

Searching around I came across this Stack Overflow answer suggesting I make changes to my .bash_profile but those changes were already present meaning I had little luck in finding a solution.

Other suggestions involved rather convoluted steps to just get the thing working, something I neither had the time for nor thought should be required.

Luckily Jenkins provides an RVM Plugin which allows the build step to run inside an RVM-managed environment, meaning it will respect the RVM settings I’ve set outside of Jenkins…

Now running ‘rvm list’ via a build script shows that we have multiple versions of Ruby available, with 1.9.3 set as the default.

And all of a sudden my Ruby scripts work like a charm!

Setting up Git on ReviewBoard

I’ve recently made the switch to Git and started using it on a couple of projects at work.  One of the important things I needed was automatic generation of reviews on ReviewBoard for our Git server, and I was in luck because it’s pretty simple to do.

I’m posting this here both as a reminder to me should I need to do it again, and in case anyone trips over a couple of steps that are not highlighted as clearly in the documentation.

 

The Git Server

ReviewBoard works best if you have a primary Git server (we’re using Gitolite at the moment) which most people clone from and push their changes to, so using this with any GitHub projects you have won’t be a problem.  It’s against the content of this server that the reviews will be built.  I went along the path of having a local clone of the repository on the ReviewBoard server (more on that later), so for now it’s simply a case of cloning your repository onto the ReviewBoard server machine, somewhere the user running the ReviewBoard server can access it.

 

The ReviewBoard Repository

Once you have a local clone, you can start setting up your repository in ReviewBoard.

Hosting Service: Custom

Repository Type: Git (obviously!)

Path: This needs to be the absolute path of the Git repository on the ReviewBoard server machine, not your local machine.  In my case it was simply ‘/home/administrator/git-repositories/repo_name/.git’.  Since we have a number of Git repositories on ReviewBoard they all get put in the same git-repositories folder so it’s easy to set them up.

Mirror Path: This is the URL of the Git repository you cloned from.  To find this, simply run the following git command and copy the address from the Fetch URL line.

git remote show origin

My Mirror Path (because we’re using SSH over Gitolite) is something like git@git-server:repo_name.

Save your repository, and that’s done.

 

Doing Your First Review

Now you can start on your first review to see if everything is set up correctly…  One thing to note is that a review will only be generated based on what you have committed to your local repository.  So if you have unstaged local modifications they won’t be picked up.

So, modify your code and commit.

When using Post Review (you are using Post Review, right?) creating a review is easy – simply call the following from the root of your Git repository (you can make it even easier by adding this to the right-click context menu in Windows)

post-review --guess-summary --guess-description --server http://rb-server -o

If all has gone well, the review should pop up in the browser of your choice ready to be published.

 

Doing Your Next Review?

Now, this will work fine until you push what you’ve been committing.  When you next commit and try to generate a review, you’ll start to get rather cryptic errors…

The problem is that the repository sitting on the ReviewBoard server is still in the same state it was in when you first cloned it: the content you pushed hasn’t been pulled, and the ReviewBoard server doesn’t check whether anything on the git server has changed.  So we need to make sure the RB server is keeping its local copies up to date.

It’s a shame this isn’t built into the Review Process to be honest, but I can understand why, so we simply need to do the work for it.

All I’ve done is create a simple Ruby script which spins through the repositories in ’/home/administrator/git-repositories’ and polls whether anything needs to be updated.  If it does, it does a pull; if it doesn’t, it moves on to the next one.
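The original is a Ruby script; a minimal shell sketch of the same idea (the function name is mine, and it fast-forwards unconditionally rather than polling first) might look like this:

```shell
#!/bin/sh
# Hypothetical stand-in for the Ruby updater: fast-forward every clone
# under the given root so ReviewBoard's copies match the central server.
update_repos() {
  for repo in "$1"/*/; do
    # --ff-only only ever moves the clone forward to match the remote,
    # and fails loudly if the local copy has somehow diverged.
    git -C "$repo" pull -q --ff-only || echo "failed to update $repo"
  done
}

# Example: run it against the folder holding ReviewBoard's clones.
# update_repos /home/administrator/git-repositories
```

Run it from cron (or a Jenkins job) as the ReviewBoard user and the clones stay fresh without anyone thinking about it.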

So as an example, just manually update the repository on the RB server and try to post another review.  This time it’ll work flawlessly, and you just need to set up a system for updating the repositories that fits in with whatever workflow you’re using.

 

Creating Reviews Using Post Review

In the above examples we used the following command to create a review, which pulled out all recent commits and generated a single review from them

post-review --guess-summary --guess-description --server http://rb-server -o

But there are other ways to generate reviews.

The following will create a review containing only the last commit you made

post-review --guess-summary --guess-description --server http://rb-server -o --parent=HEAD^

This one allows you to create a review using the last [n] number of commits you made

post-review --guess-summary --guess-description --server http://rb-server -o --parent=HEAD~[n]

 

NiftyPerforce and 64bit Perforce Installs

I’m not a big fan of Perforce, but it has its plus points and as a result we use it at work. One thing I don’t use is Perforce’s official Visual Studio SCC plug-in, which I think is just trying to be too smart for its own good. I do use NiftyPerforce though, which is a small, quick and simple plug-in that does exactly what it needs to do.

I recently installed it on a 64bit version of Windows running VS2010 and the 64bit version of Perforce. NiftyPerforce has issues with this, leading to the following dialog when you try to use it.

Could not find p4.exe installed in perforce directory? Well I know it’s there because I use it every day…

If you have a look at the NiftyPerforce output, there’s an interesting line in there…

Found perforce installation at C:\Program Files(x86)\Perforce\?

What? There is no ‘C:\Program Files (x86)\Perforce\’ folder, since the 64bit version of Perforce is actually installed to ‘C:\Program Files\Perforce\’.

I quickly Googled the issue and was surprised to find a few bug reports on it, but nothing else.

Anyway, the solution was actually quite simple (if a little hacky). I created a symbolic link to the actual Perforce install, so to NiftyPerforce it looks like Perforce does exist in ‘Program Files (x86)’.

> mklink /D "C:\Program Files (x86)\Perforce" "C:\Program Files\Perforce"

Interestingly it actually works if you remove the /D option too, but maybe that’s just a quirk of Windows?

Now when NiftyPerforce looks for our Perforce install via the (x86) directory, it’ll link nicely to the directory that actually stores the executable.

I’m waiting for this to blow up in my face as other tools start to get confused as to where my Perforce install is, but so far, so good…