When someone tells you to use MSAccess, your immediate response should be “I quit!”
I feel that any tool where you assign “blame” as a central feature is not a good tool.
I’ve worked in a production environment before and I wish we had a tool like Git back then. It would have made the whole process of moving programs between environments and making changes better and probably easier. I can’t even imagine how pissed off I’d be if I had to use Git for putting together a rate filing or doing monthly loss ratio reporting.
There are good non-blame reasons to know who made what change.
Assigning blame has more to do with how you use it than the tool itself. If you’re working in a blaming environment, the same will happen whether or not you’re using a version control tool. It just takes more work to find out who to blame.
I would be pretty salty about that, too. I suppose it’s relevant that I hate MSAccess. But that passes my basic test – the whole group has access to and can use SQL Server. So it’s not a “key person will leave and no one can figure out what he did” risk. (In fact, MSAccess is rife with that risk by its very nature, but I digress…) If a bunch of people can use the tool, and it’s better for the people who need to do the work, you should be using the better tool.
I’m not a fan of the “blame” terminology in git. I think it is trying to be a bit cheeky with that; most other version control systems don’t use that term.
I wouldn’t want to take an entire IT production process and transfer it to something like a rate filing. But for me, not having to worry about a file being accidentally changed, or by whom, or knowing what version was current on a specific day, or having to come up with a backup version name, is a big stress relief, including for something like a rate filing. I’m sure mileage varies though. And there are definitely other ways to reduce the risk of those things happening.
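To make that concrete, here’s a rough sketch of what I mean (the file name and date are just hypothetical, assuming the work sits in a git repo):

git log loss_ratio_report.xlsx                    # every saved version of this file: who, when, and why
git rev-list -1 --before="2016-03-31" master      # which commit was current on a specific day
git checkout <commit> -- loss_ratio_report.xlsx   # pull back exactly that version, no hunting for backup names

So instead of coming up with “report_v3_final_REVISED” names, the history just sits there until you need it.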
I think this is key. If you really want to start improving analyses, then you have to start looking at changing the system that supports the analyses. What software people have, what skills they have, etc.
If people don’t know SQL, that isn’t a reason not to use SQL, necessarily. But OP must think of it in terms of changing the system, not simply in terms of his own, individual effort or analysis.
The blame terminology is representative of the way IT people think about what they do. The name isn’t just cheeky, it is how they are covering their butt when something goes wrong. This way they can point out that they implemented exactly what the BA told them to and the QA tester found no issues. It’s all in the documentation and can be traced back to this other person/group and it isn’t my fault. These types of tools are designed to pass blame and that is the central feature you are paying for. I can’t think of a single project I’ve worked on in the last ten years that would have been made better by using this kind of tool.
The side effect of using blame to provide version control is that you can trace down very precisely when and how things went sideways. This is very useful for impact analysis and damage control. I no longer work on things that have that kind of downstream impact so my view of these types of tools for business side usage will definitely be skewed.
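For what it’s worth, the tracing itself is pretty lightweight. A rough sketch (the file name and search value here are hypothetical):

git blame -L 45,60 trend_factors.R     # who last touched each of these lines, and in which commit
git log -S "0.035" -p trend_factors.R  # find the commits that introduced or removed a specific value
git bisect start                       # start a binary search through history for where results changed (then mark commits good/bad)

So when a number moves unexpectedly, you can usually pin down the exact change, and the context around it, in minutes rather than days.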
FWIW, we do a lot of software dev, shoot it’s half my day. I dunno if the dev uses git, I suspect he does - but it’s just for him. It got mentioned once, and I’m like, no thanks I don’t need that overhead. And I’m pretty technically literate.
Instead, we use a ticketing system. Not version control, but close enough. It’s basically threaded documentation on software projects where every step is recorded, along with all the back-and-forth between me and the dev. That’s super easy for me to use, it’s effective, and I don’t need to bother with version control.
The other thing we do, somewhat of a tangent, is an in-house IM program. We’re back and forth on it all day long, just bouncing quick messages. It’s also got video conferencing, so if it’s longer than two sentences I just IM ‘got time for a call?’.
We were using Zoom, but every call has to be scheduled, so now I only use Zoom for…scheduled calls like our weekly meetings.
But in a scientific or analytic framework, the emphasis changes. It is no longer about meeting some set of specific external requirements, which usually no longer exist in the same way. Instead it is about reproducibility. If I use your result (or my own from a different analysis), then I want to know exactly how it was calculated so that I can reproduce it. And version control makes that reproducibility much easier, at least in my experience.
My hot take is that rate filings should be reproducible, with the entire workflow source controlled. I take it the output is a pdf or something, plus some auxiliary spreadsheets, that goes to some government bureaucrat for their stamp of approval. At any point in time, even 10 years later, you ought to be able to press a button and have the program pull data from a database, do some calcs, and spit out the pdf and spreadsheets exactly as they were 10 years ago, including all the colorful formatting and stuff. LaTeX, knitr, sphinx, RMarkdown, whatever. The technologies are there.
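A minimal sketch of the press-a-button part (the .Rmd name and parameter are hypothetical, and this assumes the filing is written in RMarkdown that declares eval_date as a parameter and does its own data pull):

Rscript -e 'rmarkdown::render("rate_filing_2015.Rmd",
                              params = list(eval_date = "2014-12-31"),
                              output_file = "filing_memo.pdf")'

One command reruns the calcs against the pinned evaluation date and knits the memo and exhibits; tag the repo at submission and the whole thing is recoverable later.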
Then again I know this is pretty far out there and a tough sell for you actuaries, so I do use MSWord in this case to accommodate people.
Overall I agree, but I think there are some tensions there that must be acknowledged.
A lot of the reproducibility literature is concerned with scientific knowledge. And in science, you usually don’t want your data “contaminating” the decisions you make about your analysis. So much so that science often “blinds” the actual content of the data until the analysis is done.
This lends itself to the kind of reproducibility you are talking about, where you basically press a button and get the result. And you should be able to change the content of the data, press the button again, and get another equally valid result.
I’d argue that actuarial work is about knowledge for a business decision. Science is part of it, but ultimately the analysis itself is going to depend on peculiarities of the business needs, and of the content of the data itself. This makes it less suitable for the kind of reproducibility where you press a button and get an answer. Instead, reproducibility means you record all the individual and peculiar decisions you make based on the data, but don’t necessarily try to automate or generalize those decisions themselves.
We had exactly this at one place I worked. There was a system to pull together rate filings, once you had all the data it was the click of a button - there’s your explanation memo, there’s your supporting exhibits, there’s things that are state-specific, there’s a sheet saying what needs to go where in SERFF, it’s just a matter of the filing analyst taking it and pretty much following instructions. It was elegant and it was brilliant. Whether you could re-create old filings, I don’t know (if the data was there, you probably could), but certainly rate filings were an incredibly simple part of the entire process.
I tried suggesting it at one place. It became an assigned project on top of the 137 others I had, and then got hijacked and corrupted by the chief actuary, as he did with many other projects. I started implementing it at another place - at least getting a basic set of exhibits - and then the manager there flaked out and decided the existing completely manual system was better. It’s currently a pet project where I’m at, on top of about a dozen others I have.
My issue is not with the versioning. Versioning is great and needs to be done. My issue is with the assertion that an IT tool is the right tool to use just because it works. Maybe it fits exactly with the type of work you do and produces no extra overhead whatsoever but I’m highly skeptical. You can use a tuba to play Aqualung instead of a flute but it isn’t Jethro Tull.
That is quite the hot take. It isn’t realistic for two reasons. First, the storage requirements are cost-prohibitive for the benefit. For my company that would mean storing hundreds of gigabytes of detail data for each filing. Storage is cheap, but not that cheap. Second, there is a time horizon for everything that is produced, and keeping the data beyond that point creates a liability.
Maybe one reason for our different perspectives is that i don’t see it first as an IT tool.
Version control has been used in science and scientific programming for decades. That process probably has as much claim to version control as the current “iteration” of software development (since version control was originally developed a long time ago, when IT work and workflows were presumably very different).
So to me, it’s not a question of whether this IT tool should be used. It’s a question of whether this research science tool should be used. The big difference then is that actuarial work is often not done through a programming language. And that can make version control less appropriate for some applications.
Sometimes I see people trying to turn analysis processes into engineering processes, which I am not typically a fan of. If I had the same experience with version control as it sounds like you had, through a maybe controlling IT environment, then I could see myself being much more skeptical.
My motivation for this question leads to another question. I made this thread because, let’s say, there are companies that pay 1.5x for actuaries who can do a thing Y.
If my company isn’t going to let me do Y, I won’t be able to earn 1.5x, which adds to how pissed off I am when I’m told I can’t do Y. How do I earn 1.5x when I can’t do Y? Such an order makes me feel doomed to my pay band forever, and I say: either let me do Y and pay me 1.5x, or I am going to do Y regardless of what you say and how you appraise me, and wind up at a company that pays 1.5x after I get good at it.
This has happened a few times and I haven’t been able to settle this amicably. Is there even a way?
So go somewhere that lets you do Y.
Another option that sometimes works is to do it both ways, especially if the second “fancier” way is done outside work hours. This would let you honestly say that you have done projects demonstrating those skills.
You can sell it like you think these additional techniques are a good idea, so much so that you are willing to back it up with your own time.
More generally, on your own time you can probably use their data and computers to learn these skills, and then once you have learned them, go someplace else if your current employer won’t let you use them.