In short: Give me a threat model, by which I mean realistic groups with goals and capabilities, and we can mitigate the risks. Just talking about "security" in a void is pointless at best and harmful at worst, since ill-considered security measures have a habit of leading to even worse insecurity.
Depending on how you define this word (actual vs. perceived security), we would adjust the type of conversation accordingly.
A lot of "security" is theatrical crap that skilled developers don't have patience for.
Security is important but for most things it's not the most important thing. Useful but insecure can be used for something. Useless but secure can't.
First of all, do you see how things disappear from the list or move down it?
Yeah, that is because a frickton of open-source maintainers, usually unpaid devs just so you know, spend a lot of time making sure these attacks are impossible.
Like, I am sorry, but the whole software world is in love with Rust and spitting on C for a reason. We care a lot about security.
What we do not care about is something unactionable or unrealistic. And that? That is what "security" brings to the table.
Put it another way: being big on infosec-world buzzwords does not pay devs' bills and does not generate money. So we solve the problem in our tools.
It just happens that this work is invisible to you and society in general.
Put another way: if we did not care about it, the list would be far, far worse. Like, we would not have TLS. Think about that for a minute.
People are mostly motivated by gratification. You've written _functioning_ code - you can instantly see how it solves the problem at hand. You've written _beautiful_ code - you can stare in satisfaction at the negative total in `git diff --stat`.
You've written _secure_ code - your reward comes in the form of nobody talking about your code for the next twenty years ;)
The security in our domain is so bad[1] that I think the only way to improve is via regulation. I can't wait for the RED directive to be in force in 2024 and many products to become illegal in the EU. And I can't wait for the Cyber Resilience Act to come into force to expand the RED directive to backend systems, and I can't wait for the Data Act to stop our incompetent middle managers saying "Data is the new oil" ...
While "compliance != security", the industry has done an abysmal job in coming up with better guard rails itself, so at this stage even a poorly written law is better than nothing. Most of my colleagues (who are still there) are so pissed with the internal attitude of companies that they openly joke that they hope the company get's breached because it's the only way management will learn.
Firefighters and arson:
Until working in IoT I thought it was a strange coincidence that some firefighters suddenly become arsonists.
The Blair Witch Project:
The behavior of the protagonists in that movie made me so upset that after the first 15 minutes I couldn't wait for the witch to arrive and kill them all. My job for the past 15 years was not unlike the first 15 minutes of that film.
While I would never do anything to hurt these companies nor defend anyone who does, I do understand now a lot better why somebody is rooting for the witch to come or why a firefighter would set fire to a place.
The worst of it is that while the 3 of us, in a company of 80K employees, are in charge of securing the product, we will likely get hacked in the next 12-16 months. And even though we informed management, and continue to be gaslit by middle managers for "stopping progress", it will also be us who get blamed for not having done enough once the witch comes knocking.
That said, the situation is even worse for players in the US, who have great frameworks and standards but lack regulation and binding laws.
[1] Yes, I'm speaking for the entire industry, since I also chair several industry alliances with a strong security track and have good insight into the attitude of most companies.
We have a team dedicated to performing security reviews for most code changes. Your change can't go to Production until it has been assessed by someone from the secure code team. They can request rework if they think you have introduced a vulnerability.
We have regular pen-testing performed.
There are various vulnerability scanners running against the code repos and blocking builds if a dependency is identified to contain a new CVE.
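To make that kind of gate concrete, here's a rough sketch of what a build-blocking dependency check might look like, assuming a Python codebase, a `requirements.txt`, and the `pip-audit` tool (none of which the commenter specified, this is just an illustration):

```python
# ci_dependency_gate.py - hypothetical CI step: fail the build if any
# pinned dependency carries a known CVE. Assumes `pip-audit` is installed.
import subprocess
import sys


def main() -> int:
    # pip-audit checks requirements.txt against public vulnerability
    # databases and exits non-zero when it finds a known vulnerability.
    result = subprocess.run(
        ["pip-audit", "--requirement", "requirements.txt"],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print("Known CVE found in a dependency - blocking this build.",
              file=sys.stderr)
    return result.returncode


if __name__ == "__main__":
    sys.exit(main())
```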
The project I mainly work on has been in active development for decades. It has well-defined frameworks for many common actions. Most of the time we are working within those frameworks, which have already been vetted thoroughly. Ironically, it is rare that we would need to touch code in a way that could introduce a vulnerability.
In a previous role I worked on desktop applications for engineering simulation. There was no requirement for secure coding for those projects as there was no central database. All the models were file based.
So it depends on the project and the risk and consequence of a malicious actor finding a vulnerability and exploiting it. The health and finance sectors have to take secure coding seriously. From experience, many oil and gas companies are also super strict on controlling data access and will often request proof that a software application has been security reviewed and pen-tested.
This is just selection bias. Your job is to make people comply, most likely with something that hasn't been a priority until something happened to make it a priority. If you have a team built around shipping lots of functionality quickly, well, you are going to get a lot of blank stares when you start slowing things down. So, compliance is a hard job. And people who haven't made security a priority over time are not going to have much awareness of best practices, or OWASP.
> It appears to me that the development of secure software is primarily led by cybersecurity folks rather than developers themselves. Whats HN's view on this?
If you want to shift from "move fast and break things" to quality + security (which are peas in the same pod), it will often take new leadership. The processes, people, systems, and culture will need to change a lot. An observation: being good at finding problems is not the same as being good at fixing and preventing problems, and this is where you have to be careful when you are changing leadership. It's not as simple as hiring someone with a certification. It's actually a big organizational challenge.
Everyone has their own particular things they happen to become interested in, so some few people are into security, just like some few are into stamps.
But outside of that, security is merely important, not useful or interesting or productive or functional or cool or fun.
Fundamentally the useful function of any machine is to do something. The problem that is interesting and fun and gratifying to solve is how to get that something done at all.
Other considerations like optimizing the design for some parameter like manufacturing cost or materials availability or reliability or field repairability, or writing the documentation, are all secondary puzzles that can be gratifying but are far less so than the initial creation. It's gruntwork and 'mere engineering'.
Figuring out how to ensure that the machine can only be used by approved people in approved ways at approved times and log all activity for auditing... that is even less interesting than optimizing or documenting, unless that just happens to be your kink.
It's hardly a boggling mystery. It's like asking why developers don't seem to like meetings and 7 bosses.
If you can calmly think about something to automate, make a good scenario, do some mental experiments, etc., then you can craft good code, which happens to also be secure in general. But such a model is long lost.
This is almost but not quite secure code - the common thread is that secure and robust code both have to sensibly handle whatever they're given and happily continue in a known state, no matter what the input.
Hackers use fuzzers to concoct evil input that forces exploitable states.
The real world applies sod's law to any input that comes from sats, planes, instruments, etc.
Power fluctuations, lightning strikes, disconnected wires left dangling and sparking, N+10 hard resets where N is a number so ridiculous nobody would ever turn it on and then off again that quickly, that often, ... etc.
When you circle back over any such signal collection for analysis, image generation, or cross-referencing, poor handling of unexpected input will cause such a mess that inevitably the habit develops of coding excessively defensively, because too much paranoia is rarely enough.
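To make the "continue in a known state no matter what the input" idea concrete, here's a rough sketch (my own illustration, with a made-up frame format) of defensively parsing a sensor frame so that truncated, noisy, or out-of-range data degrades to a safe "no reading" result instead of an exception or an exploitable state:

```python
import struct
from dataclasses import dataclass
from typing import Optional

# Hypothetical 12-byte frame: 4-byte magic, 4-byte unsigned counter, 4-byte float.
FRAME_FORMAT = ">4sIf"
FRAME_SIZE = struct.calcsize(FRAME_FORMAT)
MAGIC = b"SENS"


@dataclass(frozen=True)
class Reading:
    counter: int
    value: float


def parse_frame(raw: bytes) -> Optional[Reading]:
    """Return a Reading, or None for anything malformed - never raise."""
    if len(raw) != FRAME_SIZE:
        return None                        # truncated or padded frame
    magic, counter, value = struct.unpack(FRAME_FORMAT, raw)
    if magic != MAGIC:
        return None                        # line noise, wrong protocol, dangling wire
    if not (-1000.0 <= value <= 1000.0):   # also rejects NaN and +/-inf
        return None
    return Reading(counter, value)
```

The point is that every rejection path lands in the same, known state (`None`), so the caller has exactly one failure case to handle instead of a zoo of exceptions that a fuzzer can poke at.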
DevOps is just a fancy word to signify that modern system administration requires coding proficiency.
Agile is a project management framework useful for IT projects.
I don’t see how those three phenomena are related?
And we don't know who's actually going to be attacking the internal software!
It's only gradually, over many years, that I built up my knowledge of secure coding, especially after I started dabbling in front-end web development.
I'd have thought things have improved now because web and cloud are the first choices everywhere. But after seeing your post, I'm wondering if what I experienced is still the norm.
This must start at the bottom: The language, the DB/Store, Privacy, the management of secrets.
Only recently has this more or less started to get some traction (Rust, Apple's insistence, etc.).
But as long as JS, C, Java, and C++ rule and they don't offer "secure code by default" and "the login sample uses proper APIs", everything else is impossible.
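For instance, "the login sample uses proper APIs" could be as simple as the first snippet a newcomer copies already using a real key-derivation function and a constant-time compare. A rough sketch using only Python's standard library (parameters chosen for illustration, not as a recommendation):

```python
import hashlib
import hmac
import os


def hash_password(password: str) -> tuple[bytes, bytes]:
    """Store a per-user random salt plus a deliberately slow scrypt digest."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest


def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

If that is what the "hello world" of login looks like in the docs, the insecure version (plain SHA-256, string equality) stops being the path of least resistance.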
I'm a regular dev. Anyways, what specifically are you suggesting?
The problem is that you spend your days in a niche, a relatively well defined and limited scope, i.e. "compliance".
Developers on the other hand are working in a much broader context, i.e., working through the requirements, architecture and design, project management (who's doing what and when), the actual implementation, finding and fixing issues, etc., etc. And, of course, compliance.
I totally agree, and I am pretty sure that any developer will agree that compliance and security should not be an afterthought. It just happens that in your world compliance is the most important thing. So, why is it not for everyone else in the world?
Maybe you should actually be working on projects with developers so that you can lead and help the team in this space? In any team you see that various individuals have slightly different specializations, yours could be compliance and security. This would help a team.
What we don't do is have a checklist to go through. We haven't considered these things, because we have _eliminated the possibility_ of that particular issue.
That doesn't mean it is always safe: it just means that there are new and exciting hacks to discover.
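A classic example of "eliminating the possibility" rather than checking for it (my illustration, not the commenter's): once every query goes through parameterized placeholders, SQL injection isn't something you review for on each change, it's something the construction rules out.

```python
import sqlite3


def find_user(conn: sqlite3.Connection, username: str):
    # Vulnerable by construction (don't do this): attacker-controlled text
    # becomes part of the SQL statement.
    #   conn.execute(f"SELECT id, name FROM users WHERE name = '{username}'")
    #
    # Safe by construction: the driver treats username strictly as data,
    # so there is no injection case left to remember to check for.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchone()
```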
As an implementer of APIs, most security concerns are handled during auth. The only usual things are care in handling externally supplied values (e.g. database ids or strings) and maybe redacting some parts in logs. Every now and then a new thing gets made that needs its own set of permission scopes and assignments to apps, etc.
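As a small illustration of the "redacting some parts in logs" point (field names here are hypothetical), a logging filter can make redaction the default instead of something every call site has to remember:

```python
import logging

SENSITIVE_KEYS = {"authorization", "password", "api_key"}  # hypothetical field names


class RedactingFilter(logging.Filter):
    """Blank out sensitive values when a dict is passed as the log argument."""

    def filter(self, record: logging.LogRecord) -> bool:
        if isinstance(record.args, dict):
            record.args = {
                k: ("[REDACTED]" if k.lower() in SENSITIVE_KEYS else v)
                for k, v in record.args.items()
            }
        return True


logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("api")
logger.addFilter(RedactingFilter())

# Logs "request headers: [REDACTED]" instead of the bearer token:
logger.info("request headers: %(authorization)s", {"authorization": "Bearer abc123"})
```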
> development of secure software is primarily led by cybersecurity folks rather than developers themselves
"cybersecurity folks" who develop secure software are "developers". What you're really saying is that app feature developers, etc don't focus on security, which they should but only as much as they need to which isn't very much.
TL;DR: I bet most devs do care about security, but it's just one aspect to prioritize against other goals, and it's a task that can never be fully completed, not pro-actively without specific threats at least. You can never check that "this project is now secure against random attacks" checkbox.
I can hand a stranger $5 without having to trust them with access to my wallet.
I doubt anyone, other than Jart*, can show me how a user can hand a document to a program they don't trust, without risking all of their stuff (and the OS), on Windows, Linux, or Mac, in a way that is just as transparent and easy.
You want programmers to spend time and effort routing around missing security infrastructure, which is always going to be futile, and they know it instinctively.
*Because running a Linux executable in a browser offers a path towards LPE (Least Privilege Execution)
Unfortunately, we live in a real world with endless corner cases, so real-world software ends up having to address those corner cases - meaning more code, and more data to work with. Such software is also rarely unburdened by deadlines and prioritizations from management - meaning that as much as programmers would like to pursue security hardening, there's rarely time to do so.
In other words: you're barking up the wrong tree. Take it up with my manager/customer/etc. demanding I devote my time to new features instead of even so much as tech debt reduction (let alone actual security hardening).
All they did was email checklists full of demands for specific tech or protocols to be used that were either completely unnecessary, actually wouldn’t work for our project, had publicly known security issues, and/or would not make the system more secure. They almost always had no clue when I asked them for the threat model they were basing their demands on.
Don’t get me wrong. I am sure that there are competent “cybersecurity folks” out there. I just haven’t met a single one in my 30+ years career. YMMV of course.
So, my question is: where do I need to consider "security" in this situation, which is common in all the banks I've worked for?
The real problem is that software development is an immature profession. It's a Wild West; anybody can identify as a developer. There are no educational or professional hurdles to overcome before you can provide your services.
No developer has gone to jail, or had their licence revoked for well-intentioned, but bad code. Incompetence is largely consequence-free, so the incentive for thoroughness is only present in the mind of the individual developer.
That said, security is difficult, time consuming and expensive.
Ultimately, meeting their compliance objectives and addressing the threats is their business.
We (infosec) can, and will help, but the outcome is their responsibility. We provide threat models, guidelines, patterns, and consultancy to the tribes if they need it (as well as SecOps, etc).
By ensuring the incentive model makes security part of the problem they need to solve, we manage to avoid the constant fights - infosec is a "pull" service at the design stage rather than an endless, fruitless "push".
None of these could really be used to exploit anything.
If you just follow some common sense and start with a good architecture initially, security isn't something that needs extra particular attention.
If you make a process with too much friction, it might be secure but you'll get user churn. Plus a lot of end users are straight up idiots, making something too secure means a lot of them won't be able to use it.
But that said, the only time I've seen insecure coding in the wild is in the endless blog spam on sites like Medium, etc.
Would be interesting to know if there is some kind of reward model out there that solves this.
OWASP is for security professionals which is why you heard about it from them.