HACKER Q&A
📣 SimingtonFCC

I’m an FCC Commissioner proposing regulation of IoT security updates


Hi everyone, I’m FCC Commissioner Nathan Simington, and I’m here to discuss security updates for IoT devices and how you can make a difference by filing comments with the FCC.

As you know, serious vulnerabilities are common in IoT, and it often takes too long for these to be patched on end-user devices—if the manufacturer even bothers to release an update, and if the device was even designed to receive them. Companies may cease supporting a device well before consumers have stopped using it. The support period is often not communicated at the time of sale. And sometimes the end of support is not even announced, leaving even informed users unsure whether their devices are still safe.

I’ve advocated for the FCC to require device manufacturers to support their devices with security updates for a reasonable amount of time [1]. I can't bring such a proposal to a vote since I’m not the chairman of the agency. But I was able to convince my colleagues to tentatively support something a little more moderate addressing this problem.

The FCC recently issued a Notice of Proposed Rulemaking [2] for a cybersecurity labeling program for connected devices. If they meet certain criteria for the security of their product, manufacturers can put an FCC cybersecurity label on it. I fought hard for one of these criteria to be the disclosure of how long the product will receive security updates. I hope that, besides arming consumers with better information, the commitments on this label (including the support period) will be legally enforceable in contract and tort lawsuits and under other laws. You can see my full statement here [3].

But it’s too early to declare victory. Many manufacturers oppose making any commitments about security updates, even voluntary ones. These manufacturers are heavily engaged at the FCC and represented by sophisticated regulatory lawyers. The FCC and White House are not likely to take a strong stand if they only hear the device manufacturer's side of the story.

In short, they need to hear from you. You have experienced insecure protocols, exposed private keys, and other atrocious security. You have seen these problems persist despite ample warning. People ask, ‘why aren’t there rules about these things?’ This is your chance to get on the record and tell us what you think the rules should be. If infosec doesn’t make this an issue, the general public will continue falsely assuming that everything is fine. But if you get on the record and the government fails to act, the evidence of this failure will be all over the Internet forever.

If you want to influence the process, you have until September 25th, 2023 (midnight ET) to file comments in the rulemaking proceeding.[4] Filing is easy: go to https://www.fcc.gov/ecfs/search/docket-detail/23-239 and click to file either an ‘express’ comment (type into a textbox) or a ‘standard’ comment (upload a PDF). Either way, the FCC is required to consider your arguments. All options are on the table, so don’t hold back, but do make your arguments as clear as possible, so even lawyers can understand them.

I’m here to listen and learn. AMA. Feel free to ask any questions about this or related issues, and I’ll answer as many as I can. I just ask that we try to stay on the topic of security. My legal advisor, Marco Peraza, a security-focused software engineer turned cybersecurity lawyer, will be answering questions too. I’m open to incorporating your ideas (and even being convinced I’m wrong), and I hope that my colleagues at the FCC are as well. Thank you!

[1] https://www.fcc.gov/document/simington-calls-mandatory-secur...

[2] https://www.fcc.gov/document/fcc-proposes-cybersecurity-labe...

[3] https://www.fcc.gov/document/fcc-proposes-cybersecurity-labe...

[4] If your comments are purely in response to arguments made in other comments, you have an extra 15 days, until October 10, 2023.


  👤 trelane Accepted Answer ✓
How about requiring devices to accept alternate, Free Software firmware, from the upstream provider?

At the very least, it should be possible after some period of no updates or unpatched vulnerabilities, but a blanket requirement is less susceptible to gaming.

Probably the best thing to happen to wireless routers is OpenWrt and the other descendants of the WRT firmware.


👤 hannob
> The FCC recently issued a Notice of Proposed Rulemaking [2] for a cybersecurity labeling program for connected devices.

That appears to me to be the wrong way to go about this, and the reason has specifically to do with the way in which IoT security is a problem.

The most severe cases of IoT security problems we have seen were mass botnets, where plenty of devices of the same type were hacked and then used for things like DDoS attacks. Notable cases include the DDoS attacks against Brian Krebs for some of his reporting.

The important thing to understand here is that the device owner is not the primary victim. That's a third party.

This is not about consumer choice: consumers by and large do not care, because they are not the people being affected by this. An optional security label tries to address it as a consumer-choice problem, which it isn't.


👤 coldpie
FWIW, seeing a security compliance label on an IoT product wouldn't mean anything to me as a consumer. There is no such thing as computer security in 2023, and there are no hints that security will exist at any point on the horizon. Even the biggest names in the field cannot put out secure products. Products from well-meaning manufacturers are going to be absolutely riddled with security problems, and putting a sticker on the box won't change that. It is literally impossible to put out a software product with anything resembling security today. It'd be like putting a "secure against bricks" sticker on a window. Our industry is a joke. Building secure software products can't be done without completely rearchitecting how our industry operates, which isn't going to happen.

👤 ilamont
> commitments on this label (including the support period) will be legally enforceable in contract and tort lawsuits and under other laws.

When it comes to U.S. laws that touch technology, enforceability is a mess. Spyware, spam, fraud, misleading labels, etc. are already governed by various state and federal laws, yet enforcement efforts are whack-a-mole at best.

For IoT devices, the proposed requirements sound good in theory, but I fear they are practically unenforceable, particularly for consumer-grade devices manufactured overseas.

However, if powerful IoT platforms are also tied into the new regs - with Google, Amazon, Apple, Microsoft, PTC, HPE, etc. required to audit supposedly qualified devices and ban those that don't meet the standards, with escalating penalties for failing to do so - that might shift the needle.

My 2 cents.


👤 starik36
These rules sound like reasonable steps, upon first reading. Not sure what the downstream effects might be.

Is there any thought given to cloud-based devices becoming paperweights when the companies behind them just stop supporting them or turn off the API? I'd like some "assurances" in place that if the company either goes out of business or decides to sunset the service, it would be required to open source the software (or at least make it available for download). If memory serves, that happened to some Nest models a while back.


👤 woke_neolib
What makes IoT devices special, and warrants carve outs for security / vulnerabilities?

I guess I am not surprised there are security issues with these devices, because I think of most of them as coming from small companies, and wonder what the impact would be on the IoT space if only large players can work through more regulation.

That said, I can't decide whether I am more concerned about my Wyze camera sending data where I don't want it to, or about my water heater leaking its current temperature (indicating whether I am home or not).


👤 user3939382
We've seen manufacturers abuse ongoing access to devices to turn off features the device came with at the time of purchase or convert one-time-fee features into subscriptions. One of my concerns is that "security updates" should be strictly defined, in a way that prevents this type of regulation from being used as cover for these shenanigans.

👤 burkaman
> Accordingly, incorporating our modifications, we propose, for purposes of the IoT labeling program, to define an IoT device as: (1) an Internet-connected device capable of intentionally emitting RF energy that has at least one transducer (sensor or actuator) for interacting directly with the physical world, coupled with (2) at least one network interface (e.g., Wi-Fi, Bluetooth) for interfacing with the digital world. We seek comment on our proposed definition.

I think this definition would apply to stuff like phones and cars, right? If so that's great.
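
Reading the quoted definition literally, a phone (microphone and touchscreen as transducers, plus Wi-Fi/Bluetooth) and a car (many sensors and actuators, plus cellular/Bluetooth) do seem to satisfy both prongs. Here is a toy sketch of that two-prong test; the function name and example devices are invented for illustration and are not part of the NPRM:

```python
# Toy sketch of the two-prong test quoted above; names and examples are
# invented for illustration, not taken from the NPRM.
def is_covered_iot_device(intentionally_emits_rf: bool,
                          has_transducer: bool,
                          has_network_interface: bool) -> bool:
    # Prong 1: internet-connected device that intentionally emits RF energy and
    # has at least one transducer (sensor or actuator).
    # Prong 2: at least one network interface (e.g., Wi-Fi, Bluetooth).
    return intentionally_emits_rf and has_transducer and has_network_interface

print(is_covered_iot_device(True, True, True))   # smartphone or car: True
print(is_covered_iot_device(True, False, True))  # device with no sensor/actuator: False
```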


👤 ChrisMarshallNY
Awesome!

Thanks for engaging, where the rubber meets the road!

Hopefully, you are looking into other venues as well.

HN has a great group of folks that represent some of the most cutting-edge tech, but IT runs on Java 8[0].

[0] https://news.ycombinator.com/item?id=19877916


👤 yellow_lead
How can we define a security vulnerability meeting this "serious" requirement? There are a wide variety of vulnerabilities, so it seems difficult to define this strictly.

👤 sa1
I wonder how companies will simultaneously deal with these FCC regulations in the US, and the proposed regulation to not update anything without intelligence approval in the UK.

👤 downrightmike
Hopefully automatic updates by default get in there, because no one is going to manually update any of this.

👤 scrollaway
What’s your approach wrt. open source, and generally preventing security through obscurity?

👤 eru
Hurrah, more red tape!

Please let customers opt out of your proposed protection, if they want to.

(And it sounds like that's already the status quo. So perhaps you could use your time to figure out where you can cut obsolete and cumbersome regulations instead of adding more mandatory bureaucracy that customers evidently don't want enough to pay for voluntarily?)

There might be an argument to be made about negative externalities. The standard economic answer is either to let the Coase Theorem sort it out or to tax the externalities. Not to ban what you don't like.

https://en.wikipedia.org/wiki/Coase_theorem


👤 ptero
> I’ve advocated for the FCC to require device manufacturers to support their devices with security updates for a reasonable amount of time [1].

No offense intended, but I would be worried about this more than I would be worried about the current state of the IoT world. A blanket requirement would punish hobbyists and small companies prototyping new technologies. But big players could spend relatively minor technical and legal resources for publishing regular "security updates" without trying to find and close the biggest security holes.

I would prefer that the FCC work to inform: maintain an up-to-date database of issues (reported both by manufacturers and by third parties), their impacts, and recommended fixes for those that have a fix. My 2c.


👤 prognu
Simple. Give the manufacturers the choice: either they must provide full (FLOSS) source code and documentation (full schematics) to the user to enable them to maintain, patch and thus secure their devices (see also: right to repair), OR they are liable for all damages (direct, indirect) for a 30 year expected lifetime that arise from security issues with the device AND must have insurance to cover those damages (so that they cannot get out of that liability by bankruptcy). Most will opt for FLOSS, and none will have the excuse that it would be more secure to make it proprietary. And then users will at least be able to fix issues -- and the security community will be way more effective at finding issues as it wouldn't have to do the slow reverse engineering.

👤 SoftTalker
Consumers consistently vote with their wallets on this, and based on their behavior, they don't care.

They will buy the cheapest devices they can find on Amazon, made somewhere in the Far East and as likely to set their house on fire as to punch a gaping hole in their home computer network, even when there are much better made, well-supported alternatives that cost more.

If you want to make a difference, an FCC sticker won't do it. Consumers will go for the cheaper one without the sticker. You'll have to mandate whatever minimal level of support you want to get.


👤 legohead
At the very least, manufacturers should have to notify all users directly, or via press release, when a product has been compromised or can be compromised via a known attack, with perhaps some required details like the severity of the breach and possible outcomes.

👤 johndhi
I'm generally skeptical of the efficacy of regulation to solve a problem. Can you please cite some examples of where FCC regulation has been successful in solving other problems, and explain why you believe IoT security regulation is likely to help?

👤 janalsncm
There are too many IoT devices that want my email/phone number just to perform what normal devices have been able to do for decades. No, I don't want to download an app just so I can use my apartment's stationary bike. I get enough spam already, and I don't want to agree to lengthy terms and conditions just for that. In that case I couldn't even use the bike at all without creating an account.

I think a lot of places got duped into thinking their internet connected stuff was an upgrade but in my opinion it’s a major downgrade. A device should do what other non-IoT devices do without being online, and internet capabilities should only be a value-add. A toaster should make toast without being online.


👤 doitLP
There are some great recommendations in this thread, but I just want to thank you for engaging with this community to solicit opinions from the trenches.

This is really meaningful to most of us who see the regulations in our lives as something far away that we can’t influence.

Another reminder for everyone that while you likely can’t influence something like a presidential election on your own, you can influence many other spheres with your knowledge and time that are closer to home and probably affect you more immediately.


👤 mrweasel
> Companies may cease supporting a device well before consumers have stopped using it

In which case all information required to create and load custom firmware should be released to the public. This information should be placed in escrow, in case the company ceases to exist. The same rule should apply to backend services, in case a device depends on such a service to operate.

> security updates for a reasonable amount of time

Which is 25 years or more for some classes of devices. Phones have already reached a point where they should be required to come with 10 years of security updates. I'd expect light switches to get at the very least 20 years of security updates.

Generally I believe that governments are being WAY too lenient towards manufacturers of any type of electronics when it comes to updates. It's bad for security and the environment, and it causes consumers to make bad investments. The companies making these devices have long since proven that they DO NOT CARE and shouldn't be trusted to deal with the issues themselves.


👤 encyclic
Planned or unplanned obsolescence is good for business. You are proposing regulations counter to that, so should expect counter-pressure, even for IoT makers that want to do the right thing.

By volume and impact, what devices have IoT vulnerabilities? If from large mfrs, you might expect some measure of support as that would be somewhat in their best interest, if only to preserve their brand image. My concern would be low quality, usually cheaper, whack-a-mole mfrs that come and go on Amazon, eBay, etc. Even if they release a product that would fall under these guidelines, how are you going to go after a ghost?

Also, what happens when an IoT mfr is acquired? Does the acquirer assume all the IoT risks as well?


👤 JoshTriplett
A mechanism requiring disclosure of how long security updates are available seems like a great step.

Another great step would be a guarantee of making the firmware Open Source after no more than a certain amount of time, and having that guarantee known at compile time. Effectively, that means the device will always be supportable.


👤 xoa
Thanks so much for posting this here, first of all! I agree with other comments that, rather than (or in addition to) some direct required period for security updates, I'd really like to see a dynamic setup along the lines of "Power Means Responsibility": manufacturers can stop supporting devices when they wish, but must at that time release all keys and IP licensing needed for hardware owners to take over. If a company wants to keep supporting something, and in turn keep their power over deciding how it works, for 10 years, that's fine. If they want to drop it after 6 months and make the firmware fully source-available and allow owners to add their own root keys to devices, that'd be fine too. Or someone could offer something fully open source with no strings attached but also no responsibility attached. The market can fill with a range of decent options.

But manufacturers shouldn't be allowed to have it both ways, with control post-sale over their customers' hardware AND no responsibility to support it. It should be directly linked by law.


👤 ruffrey
Many of these devices are made in China, even if designed and sold by American companies. Nearly all contain Chinese made parts. These are network devices with sensors and various behaviors. Given the tension between USA and China, especially in the cybersecurity realm - what about making the case based on US national security (in addition to consumer protection)?

👤 mattkrick
Voluntary certification, please. Law is slower than technology. This is a good thing! EnergyStar is a great example of a voluntary program doing more good than DoE or FTC mandates. HIPAA is a good example of what happens when mandates can’t keep up with technology. When it comes to security, we can’t afford another HIPAA.

👤 jddj
I don't know who those manufacturers are, but I can guess. One left-field piece of advice I would offer is looking at what they specify for their own offices/HQs.

"Smart" commercial office space has a bit of a head start in this area, and the specifiers have now had some time to find their feet.

Some prominent IoT device manufacturers have had written into the specifications for their own buildings that, for example, the end user (read: them) shall be able to manage the certificates on the devices, and so on and so forth.

A couple that come to mind would make for nice blueprints for consumer protections if you can cut through the prescriptive talk about preferred tech.


👤 fidotron
While the IoT security situation is out of control I doubt that regulating security updates will have any other result than radically reducing competition and innovation in the space by making it impossible to operate as a small company. It will simply push more hardware innovation out to China.

Having worked in the space, I came to the conclusion that the only viable secure future is to adopt star-topology local networks where local traffic for all devices goes into a single, secure, regularly updated broker device that then decides what to do with it. Any access out to the Internet or between devices needs to be mediated. The part that will annoy a lot of people is that the broker device probably needs to be able to read all of it. Therefore, rather than just regulation, I would talk to the Wi-Fi Alliance in particular about possibly expanding the scope of what it means to be a Wi-Fi router.
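
As a rough illustration of the mediation idea (the policy format, device names, and hosts below are hypothetical, not anything the Wi-Fi Alliance or the NPRM defines), such a broker could apply a default-deny decision like this to every flow:

```python
# Rough sketch of a default-deny mediation policy a local broker might enforce;
# the policy format, device names, and hosts are hypothetical.
from dataclasses import dataclass, field

@dataclass
class DevicePolicy:
    allowed_remote_hosts: set[str] = field(default_factory=set)  # cloud endpoints this device may reach
    may_talk_to_peers: bool = False                              # relay to other local devices?

POLICIES = {
    "thermostat":   DevicePolicy(allowed_remote_hosts={"api.example-vendor.com"}),
    "light-switch": DevicePolicy(),  # purely local: no internet, no peers
}

def broker_allows(src_device: str, dst: str, dst_is_local_device: bool) -> bool:
    """Every flow passes through the broker; anything not explicitly allowed is dropped."""
    policy = POLICIES.get(src_device)
    if policy is None:
        return False                     # unknown devices get nothing
    if dst_is_local_device:
        return policy.may_talk_to_peers  # device-to-device must be opted into
    return dst in policy.allowed_remote_hosts

print(broker_allows("thermostat", "api.example-vendor.com", False))  # True
print(broker_allows("light-switch", "evil.example.net", False))      # False
```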

The problem is actors like Google really want such a thing to be just in the cloud "for convenience", by which they mean monitoring everything that ever happens.

The only corporate actors I encountered that understood this were the Taiwanese OEMs, who are remarkably on point and blunt behind closed doors, but they are basically powerless to do anything about it.


👤 wjat
How about requiring devices to implement key security-relevant features in an immutable way, such as via an FPGA, so that attackers have no way of circumventing those features even post end-of-support?

👤 BearhatBeer
How will you define the difference between IoT, embedded, and devices which just happen to be tucked away?

👤 throw1234651234
Could you summarize what these "security updates" would entail? There are 1000s of attack vectors, so it would be difficult to practically define.

It's awfully vague, to the point of just being more regulation that allows selective enforcement, which reduces competition and produces boilerplate that feeds FCC department growth without improving the situation.

It will serve as a barrier to entry for new entrants. What's worse is that this will create the illusion that IoT is in any way secure. Some IoT is secure, but that standard is unrealistic to apply to consumer devices.


👤 quercusa
If a device does not violate (e.g.) RF emissions regulations, what authority does the FCC have to regulate its internals?

Thanks!



👤 adolph
The best regulation is written as if it would be implemented and/or interpreted in the worst possible way. As if it would be used by your sworn enemies against you.

Would manufacturers be required to brick devices that cannot be fixed in software? For example, if an MCU didn't have a hardware implementation of a particular crypto function.

Could a device be sold even if the end user had to take action in order to update the firmware? For example, would a manufacturer be encouraged to have every device "phone home" for "updates" without a method of operating in a network where outside Internet is unavailable?


👤 jcpham2
Unfortunately, ever since Ajit Pai dropped that net neutrality Harlem Shake video the same day he and his cronies snuck that legislation through - well, let's just say it's difficult to trust or believe a person who speaks on behalf of the FCC, a three-letter agency that was bought and paid for long ago.

It's like trusting the SEC to do, well, anything.

I have a suspicion that you're more beholden to your lobbying constituents than you are to random HN commentators. I can only hope I'm wrong.


👤 mike_hearn
As someone with a libertarian bent, meaningful labels appeal to me as a decent way to address problems without overriding the judgement of the market. An informed market avoids lemons. So this proposal sounds OK in principle but here are some questions. Please be aware that I'm not a US citizen so my views don't really matter here, I'm just looking over the garden fence and asking questions.

1. Your argument for why it's under the FCC's jurisdiction doesn't seem all that strong. In your linked speech, you argue it's important because the FCC has the ability to regulate signals interference, and insecure devices could be turned into jammers. Has this ever actually happened? If not, is this not rather a large stretch of the FCC's mandate? Perhaps this sort of effort belongs in a different part of the government, or in an international standards agreement (possibly non-governmental).

2. What's the definition of security you're using? Security problems always exist in the context of a threat model, so having a label would imply standardizing a threat model. For example, smartphone security systems were originally designed to block malware, but over time have been stretched to try and solve often vaguely specified privacy goals towards non-malicious software too. If someone commits to supporting security updates for five or ten years at risk of government censure, then the definition of security is going to become a battlefield because whoever wins gets to control all the software that's got this label.

3. Modern security is layered via defense-in-depth strategies. If there's a bug in an inner layer but it's not exploitable due to mitigations or sandboxes (software firewalls) in outer layers, is that a mandatory security update or not? It could be argued either way because the device is not technically hackable still, simply the armor became weaker. Today this is left to the best judgement of engineers, who must balance efforts to patch theoretical vulns in old devices with work to e.g. build new defenses for newer devices. If it becomes mandatory, then paradoxically, new devices may become less secure than they otherwise could have been because all the effort is going into patching old devices.

4. Imagine a company commits to security updates for all devices for 10 years, but after 5 gets into financial difficulties. Maybe due to competitors who didn't make that expensive commitment. One quick way to dig themselves out of this hole is to push a 'security update' that drastically restricts the device's functionality e.g. prevents it from installing new apps released after a certain date. This can be indeed argued to make the device more secure, and you can argue that there's no expectation that the device will always be able to install new apps anyway, so no end-user expectations or promises have been violated. How would you stop this kind of perverse incentive?


👤 maerF0x0
IMO here are some better solutions.

1. Blend FCC action with right to repair -- Require device makers to provide software patch utilities to the public, and open source the code after a period of time.

2. Rather than regulate manufacturers, educate consumers. Companies that do dumb shit should go bankrupt because customers can understand the company sucks.

3. I'd prefer the government to define standards and repercussions, not solutions. I.e., do not mandate security patches; instead add liability, per sold device and scaled to severity, for security flaws. Then let the market decide the solutions. Rather than giving patches, they might decide to just give free replacement devices, for example.


👤 tkfu
One thing that regulators need to be very careful about is how "security updates" are defined, and exactly what manufacturer obligations for issuing security updates should be. CVEs are a notoriously terrible representation of actual security risks, so a measure like "manufacturer must issue new releases that include any released patches for CVEs with a severity rating greater than 9" would be a clear non-starter.

There are also often practical issues related to security patching of embedded devices: for example, a downstream supplier's driver can make it impossible to upgrade a kernel unless/until the supplier provides a fix. Of course, strong regulation here could help to drive bad practices like that out of the industry, but I'm not going to hold my breath on that one. The effect of regulation like this would be to make things harder for manufacturers who don't have the market power to lean on their suppliers for security patches.

Finally, it's important that any regulation that mandates or strongly encourages software updates also mandates that the update system itself be implemented in a secure way. This is my specific area of expertise, and I can tell you that it's very often done very badly. A bad update system is a gigantic, flashing red target for attack. So something like mandating signatures (and sig validation) on software update images would be a good start. Mandating the use of TUF-compliant repositories would be even better.
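
As a concrete illustration of the "signatures (and sig validation) on software update images" baseline, here is a minimal sketch using Ed25519 via the Python cryptography package. The key bytes and file names are placeholders, and a production system (for example one following TUF) would also need rollback protection, key rotation, and metadata freshness checks:

```python
# Minimal sketch of verifying a detached Ed25519 signature on a firmware image
# before installing it. Assumes the device was provisioned with the vendor's
# public key at manufacture time; the key bytes and file names are placeholders.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

VENDOR_PUBLIC_KEY = bytes.fromhex("00" * 32)  # placeholder for the real 32-byte key

def firmware_is_authentic(image_path: str, signature_path: str) -> bool:
    public_key = Ed25519PublicKey.from_public_bytes(VENDOR_PUBLIC_KEY)
    with open(image_path, "rb") as f:
        image = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, image)  # raises InvalidSignature on any mismatch
        return True
    except InvalidSignature:
        return False

# Only flash if verification succeeds, e.g.:
# if firmware_is_authentic("update.bin", "update.bin.sig"): install("update.bin")
```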


👤 distract8901
I think that IoT device manufacturers should be required to support their devices for some minimum period of time AND be obligated to release the full source code for the device once they decide to end support. This also requires releasing the keys to any firmware signing mechanism, or publishing a firmware update that removes such checks.

The core problem is that without control of the firmware, consumers don't really own these devices. The company can unilaterally decide one day to brick your device and force you to buy a new one. It should be obvious that this behavior is egregiously anti-consumer and anti-competitive.


👤 atomicfiredoll
Has much consideration been given to labeling when a third-party cloud or paid service is required to use the device? As somebody who uses IoT devices "locally" on my private network, I want to know my data will stay local and protected. The recent issues with Eufy doorbells claiming to be under local control [and encrypting data], but actually sending data to the cloud, stand out to me as an example where labeling and enforcement could help [0].

[0] https://arstechnica.com/gadgets/2022/11/eufys-no-clouds-came...


👤 PaulWaldman
There is a high degree of ignorance pertaining to cybersecurity among the general public. Stand next to the Geek Squad counter at Best Buy and you'll hear login credentials being freely exchanged all day.

Since this is "opt-in" on the part of the vendors, how will consumers be educated to care about the FCC cybersecurity label to make it worthwhile?

This also seems analogous to the USDA's Organic label.


👤 mschuster91
I can't file because I'm not based in the US, but I'd love to see smartphones, tablets, and similar devices be covered as part of IoT in general.

There are multiple issues that I think need urgent regulatory attention, and the issue classes are valid for both "classic" IoT devices and phones:

1. Manufacturers often do not state anything about support: availability of spare parts, feature updates, security updates. Even those that do, like Google's Pixel lineup, offer ridiculously short support periods, and "enterprise" devices like my Samsung Galaxy Tab Active 3, which is only 2.5 years old, no longer have spare screens available. I bought an "enterprise" device in the hope that it would have a better supply chain than consumer devices, but I was mistaken.

2. Many devices with batteries are sold without the ability to easily replace them or without officially sanctioned spare parts, which creates a risk of people running devices with swollen or otherwise damaged batteries, or of devices lasting far less time than they could because batteries can and do simply lose capacity.

3. Many devices are completely locked down. This is particularly relevant for SSL root certificates whose expiry leads to devices being bricked, or for people who simply would like to enjoy the freedoms of the GPL and other FOSS licenses but can't because custom firmware can't be installed at all (due to Secure Boot) or permanently bricks features out of DRM concerns (e.g. Samsung Knox, Netflix, banking and many other apps that refuse to run on rooted or otherwise modified devices).

4. Many devices' BSPs (board support packages) are littered with ridiculously old forks of stuff like bootloaders, the Linux kernel or other userland software, and the chip/BSP vendors and manufacturers don't give a fuck about upstreaming their changes or code quality is so bad it cannot be reasonably upstreamed.


👤 realo
What about industrial IoT?

Even if a manufacturer publishes updates, the clients often would not want to change anything.

If you have a plant with 1,000 units of gizmo-A, update them over a weekend, and now 200 of them no longer do the thing they used to do... you have a big problem.

There is a genuine fear of updating anything in many industries, and I am not sure how this can be overcome.


👤 mensetmanusman
I’m concerned about smart lights that need to connect to Chinese servers to work being peddled on Amazon.

👤 solardev
How will the regulations keep up with evolving security best practices?

👤 superkuh
I guess the one thing I can hope you consider is that not everyone or everything is a for-profit company; you should leave open source and individual human developers out of these coercive regulations and only apply them to incorporated entities that exchange products for money.

The idea of a random human being forced to keep writing software via the threat of being sued is incompatible with human freedom and in that context would be far worse than the problem it "fixes".


👤 lacker
Regulation to require a certain period of security updates doesn't seem useful to me. It's very easy to send out a "security update" that doesn't actually improve security. You can send out an ad to all your users saying "You should upgrade now to our newest product!" and call it a security update. Requiring security updates may end up just requiring companies to spam their users with a certain amount of marketing material.

A bigger issue than the availability of updates is whether security updates are automatic and mandatory, or optional for the user. If a security update requires some action on the user's part, most users won't want it.

The overall problem is that the main IoT security problem is botnets, not insecure devices per se. A botnet does not affect the owner of a device very much. Thus, the owner of a device usually prefers an insecure device, rather than taking some risk of the security update breaking the device.

I'm not sure what the FCC should do here. It seems reasonable to hold the manufacturers of devices responsible in some way when those devices are used in a botnet, but I'm not sure if that's within the FCC's scope.


👤 myself248
Speaking as someone who has several cheap cameras gathering dust in a box because I no longer trust them with network access...

...manufacturers are simply never going to be incentivized to take security seriously. The best you can hope for with a regulatory approach is to incentivize them to pay more lip-service to the idea, while hiding their backdoors better. Their incentive to spy on users is simply too profitable.

(And, the legal environment is such that users can simply click-wrap away literally anything. If the terms say you agree to let the manufacturer monitor your conversations, guess what, it's no longer spying. Perhaps any such language, in the built-in firmware OR IN ASSOCIATED APPS, should immediately make a device ineligible for a favorable label.)

Only users ourselves actually have users' interests at heart. The only meaningful improvements in security that I've ever seen in the wild have been with open-source firmware that completely replaces the device's own. Not a shim on top that adds functionality while preserving the OEM's backdoors, but a complete ground-up replacement.

Therefore, the most meaningful step would be to require support for open-source firmware. Providing all the data such that open-source drivers can be written, providing a working build-environment, and making it easy to install user-provided firmware, would go a long way.

Then once the device is fully supported in the mainline distro of a mainstream FOSS project, its label could indicate that support may extend beyond the manufacturer's whims or even existence. And since the label wants to be affixed at time of sale, the incentive is on the manufacturer to get this support work done before they even ship.

Also, require a meaningful cybersecurity response. That is, they have a disclosure contact, they work with researchers to fix vulnerabilities under standard timelines, they pay bounties that make it worth researchers' time rather than incentivizing them to sell their vulns, and they check related products for similar vulns rather than playing perpetual whack-a-mole.


👤 btilly
What is the policy on manufacturer installed backdoors on commodity hardware?

See https://arstechnica.com/information-technology/2016/11/chine... for an example.

Attempting to rule on this might involve conflict with the NSA, which has a decades long history of getting backdoors into commercial tech. https://www.reuters.com/article/us-usa-security-congress-ins... says a bit about this. But a lot of our tech is built in China. They're almost certainly doing the same thing, and history shows that those backdoors get discovered then become a source of security holes for the rest of us.


👤 phkahler
>> If they meet certain criteria for the security of their product, manufacturers can put an FCC cybersecurity label on it. I fought hard for one of these criteria to be the disclosure of how long the product will receive security updates.

I think labeling may be a good idea. Requiring updates is probably not a great idea. For most of my things I prefer they not auto-update, as that invites another whole world of problems.


👤 zamalek
I added a comment voicing support. I also added concerns about only being able to update devices through specific ecosystems. I use Home Assistant at home, and many devices will only update through the likes of Google Nest or Alexa - rendering them unsupported on day 1. I was lucky enough to know not to purchase devices that have this issue, but many home owners could find this out the hard way. Firmware should be available from a publicly accessible location.

👤 samstave
>Many manufacturers oppose making any commitments about security updates, even voluntary ones. These manufacturers are heavily engaged at the FCC and represented by sophisticated regulatory lawyers.

Gee, I wonder if this has anything to do with the corrupt people like Ajit Pai?

Sincerely - after the Ajit Pai debacle and the fraudulent selling off of bandwidth rights, I literally don't trust the FCC with anything...

So, prove to me the FCC actually understands IoT?

Basically you're attempting to regulate a swarm of GNATs and how they should behave, without understanding how they are born...

Look at Moore's Law as it pertains to surveillance and fraud tech: the size of cameras and the power of IP-enabled wireless talking devices (regardless of protocol/tech/mech), so incredibly small and so easily and cheaply replicated in any cheap-tech state (China, and many other places we don't talk about), make it such that one can only assume one is under 100% surveillance 100% of the time...

THAT is what you should be regulating - the scanning of an area for detection of IoT device transmissions

Think of Purple (brand) air quality monitors: a frequency-field monitor for all RF transmissions in a given area (as can be heard) would be great, with add-on sensors for such monitoring that can RSSI-triangulate the location of devices.

Think of Apple Air-Tags...

If you had cell phones all reporting RF triangulation signals - you could map out the IoT problem, frequencies, locations, targeted devices/tech etc...



👤 evanwolf
Thanks for this, Nathan. Orphaned devices pose a suite of security problems. They outlast the companies that sell them, the companies that make them, the upstream suppliers of hardware and software, the companies that service and repair them. Smart building devices and power systems can run for decades. Implanted medical devices, home health devices, and hospital systems persist longer than five years and can outlast the corporations behind them.

Please address orphaned products so that security continues, with a duty on the maker to sustain safety and security beyond the life of a product or its manufacturer. This is like requiring a sale-time deposit into an independent fund to reclaim/recycle a product's waste.

Beyond the current proposal, you might require a device's IP to be put in escrow in the event of product or corporate end-of-life, allowing customers or third-parties to take up maintenance and security. (#RightToRepair #EoL)


👤 vinay_ys
IoT devices need regulatory standardization w.r.t a few things:

1. Software stack – big fat "firmware" should not exist. The entire stack should be upgradable safely, securely, and frequently during its official supported lifetime, and should be open-sourced for the owner's own upgrades past end of life. For this, the hardware stack needs some amount of standards compliance.

2. Vendors should clearly declare/advertise the period for which they will support the device. During this period, device vulnerabilities should make them liable. After this period, they should mandatorily open-source the device drivers and unlock the bootloaders to enable free software alternatives to work on them. For certain classes of devices, there should be a mandatory minimum period of support.

3. Networking capabilities should be legally standardized and verified/certified before release to market, checked for continued compliance, and manufacturers fined if devices fall out of compliance.

3.1 Use of latest mainstream TLS with valid certificates should be mandated for all communication.

3.2 If there is outbound communication from the device, it should make clear which domains it will communicate with, so that it is easy to allow only that traffic through the firewall and keep everything else locked down (see the sketch after this list).

3.3 IoT devices should not accept inbound communication without authentication.

3.4 Follow best practices w.r.t. rollback-resistant, cryptographically verified, brick-safe software upgrades.
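
As a rough illustration of point 3.2, a vendor-declared manifest of outbound endpoints could be translated mechanically into default-deny firewall rules on the home router. The manifest format, field names, and hostnames below are hypothetical; a real deployment would also have to deal with DNS changes and IPv6:

```python
# Minimal sketch: turn a hypothetical vendor-declared list of outbound endpoints
# into iptables rules that allow only that traffic from the device and drop
# everything else it tries to send. The rules are printed, not executed.
import json

MANIFEST = json.loads("""
{
  "device_ip": "192.168.1.50",
  "allowed_outbound": ["firmware.example-vendor.com", "telemetry.example-vendor.com"]
}
""")

def rules_for(manifest: dict) -> list[str]:
    device_ip = manifest["device_ip"]
    # iptables resolves hostnames when a rule is added; a real deployment would
    # pin IP addresses or use a DNS-aware firewall instead.
    rules = [
        f"iptables -A FORWARD -s {device_ip} -d {host} -j ACCEPT"
        for host in manifest["allowed_outbound"]
    ]
    rules.append(f"iptables -A FORWARD -s {device_ip} -j DROP")  # default deny
    return rules

if __name__ == "__main__":
    print("\n".join(rules_for(MANIFEST)))
```

Generating the rules from a machine-readable declaration is the point: the owner never has to reverse-engineer what the device talks to.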


👤 dtaht
Thank you for engaging with the community in this way. Many years ago, in a fight to preserve individuals' ability to flash their own routers, Vint Cerf and I, along with a coalition of many others, filed this report:

http://www.taht.net/~d/fcc_saner_software_practices.pdf

(Retaining the ability to reflash our own routers allowed my research project to continue, and the resulting algorithm, fq_codel (RFC 8290), now runs on a few billion devices.) The Linux and OpenWrt development process continues innovating and is very responsive to bugs and CVEs. It is a constant irritation that many products exist downstream of that work that are 5 or more years out of date and not maintained!

Key bullets from that FCC filing are on pages 12-13.


👤 dantheman
Does the FCC have any authority over IoT devices? And if so, how?

This seems like a massive overreach; how can a body designed for managing shared spectrum have any authority over devices that use the internet?


👤 carreau
Thanks for asking these questions, and with the number of comments, thanks if you reach mine.

One of my hopes is that this will affect the market, and in particular the ability to have non-connected variants of some appliances.

My hope is that if the cost/risk is high enough, manufacturers won't put pointless connectivity into some models - or will at least provide the ability to disable connectivity.

I'm also hoping this will put an end to applications that collect personal data, like my headphone app requiring that I turn on GPS and give it access to my location just to start.

Thanks !


👤 koliber
The issue seems to be wider than just abandoning support on IoT. IoT device abandonment is a real problem, but you can aim higher. Software written for hardware is generally of poor quality.

There is an insightful Hacker News thread from three days ago about the subject. I hope it will add some more insight into the scope of the issue: https://news.ycombinator.com/item?id=37352970

My 2 cents would be: treat software defects like hardware defects. Pass legislation to force manufacturers to provide a warranty for at least 2 years. Pass rules which favor the consumer ruthlessly. A button was not contrasty enough? Refund. A text label caused the user to misinterpret the action? Refund.

Perhaps the additional accountability during the early days of a device will have a positive impact on the longer-term longevity. If prices are forced to go up a bit in order to provide better support, there will be more money for higher-quality software.

Don't limit this to IoT devices. Manufacturers will find ways to skirt whatever way "IoT" gets defined. Make whatever rules you create apply very broadly to all devices with embedded software.


👤 chromoblob
Strange that in such a popular country this is so late.

> Defining the Internet of things as "simply the point in time when more 'things or objects' were connected to the Internet than people", Cisco Systems estimated that the IoT was "born" between 2008 and 2009, with the things/people ratio growing from 0.08 in 2003 to 1.84 in 2010.[29]

(https://en.wikipedia.org/wiki/Internet_of_things)


👤 asynchronous
Really thrilled to see someone in a position of influence turn to the HN community for comments on public policy. We could use more stuff like this.

👤 Dowwie
This economy seems to consist largely of cheaply made products with high profit margins. You're attacking profit margins, so good luck with that. Sorry for the pessimism but this country just derailed the FTC attempt to abolish non-competes because it would cost law firms business.

👤 tsegratis
One suggestion for medical devices -- and in general any situation where the consumer cannot assess what they're buying but it matters -- is mandated ratings, such as those for tyres: https://www.goodyear.eu/en_gb/consumer/learn/eu-tire-label-e...

Since security (medical safety, etc.) is hard to measure and therefore hard to enforce, the labels help everybody: they encourage and measure best practice rather than the unmeasurable, allowing sellers to advertise and demonstrate quality on that basis.


👤 cuttothechase
All Internet supported devices should explicitly provide documentation about their functional behavior when there is no internet.

A few examples-

- internet-enabled lights should state whether they still light up when no internet is available.

- internet enabled exercise equipment should say what is operational and what is not when there is no internet.

- automobiles should provide thorough documentation about what does not work and what does work when there is no internet


👤 ChrisCinelli
Regulation seldom proves to be the ultimate remedy.

The true catalyst for change lies in cultivating informed consumers who wield their purchasing power to support manufacturers committed to security.

If a law has to be put in place, ensure that every marketing advertisement and article concerning a device explicitly states the duration for which it will receive regular updates.


👤 fnordpiglet
Hi, thanks for the work here. I read through (some of) the linked materials, including the statements. The proposal itself is enormous, and all of it is extremely well researched. (Reading the comments here, it seems few folks read your links, as most of the comments are already addressed in some way.)

To that end (and I realize part of these exercises is exhaustiveness, due to their legal and regulatory nature), it would be really useful if there were a TL;DR version with the requests for comment boiled down to a sentence each and laid out concisely. The document is enormous, and unless it were literally my job (i.e., I were a paid lawyer or lobbyist) I couldn't justify going through it all and composing responses point by point.

The points, however, are great and we should respond - for instance, the question of whether the label should be at a product level or a device level (i.e., a subsystem of a product) is a good one. IMO it should be at a product level. Currently, device-level labeling ends up just being a blob of perfunctory tiny text. Products are what we interface with, and if any updates were applied, they would be applied at the product level anyway.

Further, to the points made on Energy Star labeling in the statement made by your peer, I think the labeling should be simple - a small, discrete set of compliance classes that can be extended over time with further rules. So 20 years of security updates is "platinum," 10 years is "gold," 5 is "silver," or something. Then the classes of label can accrete meaning over time as you enhance your proposals.

I also wonder if the formal comment system is the right interface for this community… a few might convert all the way to a filed comment, but it's not a trivial undertaking to read all the material and provide detailed commentary. I know it's what you've got to work with, but in some ways the best way to work is right here in the HN comments, then lifting material up into your direct work via the proposal and statement. To that end, maybe reaching out earlier in the process to get feedback would work?

Regardless I am glad to see our government proactively reaching out to adhoc communities of experts to solicit our feedback. Thank you, you are obviously one of the good eggs. I’ll bookmark your links and try to spend some time drafting a comment.


👤 justinzollars
I think our system has become Byzantine. The term comes from the Byzantine Empire, whose code of laws grew with time. Over hundreds of years the society was mired in complexity.

There are too many rules and regulations. The best thing you could do within your role is to advocate for rolling back and eliminating existing regulations to simplify business.


👤 2OEH8eoCRo0
If a known vulnerability in an IoT device is exploited for harm then the device manufacturer should be liable.

Right now there is no penalty for firing and forgetting half-baked IoT products onto the market.


👤 jcampbell1
I honestly don’t think we need a government solution to this issue. Consumers who want this security can buy from manufacturers with a reputation at stake such as Amazon or Apple. I don’t want your “help”. Let the market sort it out.

👤 leeter
First and foremost, I applaud the effort. I think this is a worthwhile concept. However, I think this would be better handled by the FCC delivering a report to Congress asking for specific things. Because while we can look at this just through the IoT lens, I think that's very shortsighted. There are CNC shops still running DOS and Windows 95. So what happens when the fancy new CNC they are buying today, running Windows 11 embedded, goes out of support from MS? These are things that are intended to be in use for 30+ years.

So to me there needs to be a formal process in the law for dealing with abandonment, and minimum support duration.

1. Minimum support duration: This needs to be in federal law and enforced by private right of action. This cannot be something that the FCC or FTC must enforce. This must also be binding with valid legal remedies if the company fails to comply or no longer exists. Which leads me to my next point.

2. Abandonment: The law must require that if an OEM abandons a device, sufficient information (including PCB schematics and PCB BOMs!) is made public so that both individuals and third-party commercial operations can supply support. Again, this must have a legal remedy to enforce it. My preferred remedy, if a company completely fails to provide anything, is loss of copyrights to that specific software, thus negating all the digital-locks provisions of the DMCA in regard to that specific hardware. If the company provides the necessary information under an appropriate free software and/or documentation license, then they maintain all copyrights. They need only submit the information to the Library of Congress and a third party such as archive.org; they do not need to host it themselves. Nor would they be required to provide any support after that point. Furthermore, any components not currently in production would have to be made available until such stocks are depleted.

Having a formal abandonment process for a device including a formal notice of termination of support, and releasing appropriate documentation and necessary information to the public provides a massive boost to both ongoing security but also to reducing waste.


👤 joeframbach
Here are the concerns I care about at this time:

1. If Device Attestation/WEI takes off, and if it takes this new time-limited support lifetime into its attestation signature, then we'll see more physically healthy devices sent to the landfill. A device that is reasonably fine for grandpa to play Mahjong on, but that Google won't let him use to visit Facebook because WEI can't guarantee their ad dollars, will go to the landfill. I don't want to see this waste. I want the "support lifecycle" of a product to be the FULL lifecycle, including mandating a recycling program or something of the sort.

2. I want guarantees that I can keep using an old device after its support window closes. When Google decides to sunset the Nest Doorbell, I want to own it and be able to run it myself.


👤 kapilvt
There are many parts to this problem; I have particular thoughts on two.

One concrete recommendation I would make is mandatory third-party pen tests. Software companies have to do this for SOC 2, etc. Companies putting live mics in living rooms across the country should deal with the same. This all goes toward raising the level of initial security on devices, including the update process.

The other consideration is about updates, and it's much more nuanced with respect to what's viable for a business. There is no security without updates, but the ability to produce those on any schedule is unclear, even more so when the originating company goes out of business. Ideally there would be a threat matrix here against a CVE list (e.g., remote access to a hot mic/camera) that would require a manufacturer to issue an update within X days.
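
As a purely illustrative reading of that last idea, a rule could map vulnerability classes to maximum patch timelines; the classes and day counts below are invented, not anything proposed in the NPRM:

```python
# Hypothetical sketch of a severity-class -> patch-deadline matrix of the kind
# described above; the classes and day counts are invented for illustration.
from datetime import date, timedelta

PATCH_DEADLINE_DAYS = {
    "remote_access_to_mic_or_camera": 7,
    "remote_code_execution":          14,
    "authentication_bypass":          30,
    "local_only_issue":               90,
}

def patch_due_by(vuln_class: str, disclosed_on: date) -> date:
    """Latest date by which a manufacturer would have to ship an update."""
    return disclosed_on + timedelta(days=PATCH_DEADLINE_DAYS[vuln_class])

def is_compliant(vuln_class: str, disclosed_on: date, patched_on: date) -> bool:
    return patched_on <= patch_due_by(vuln_class, disclosed_on)

if __name__ == "__main__":
    print(is_compliant("remote_code_execution", date(2023, 9, 1), date(2023, 9, 10)))  # True
```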


👤 jjguy
For those of you unfamiliar with the specific challenges IoT patching brings, here is a blog post from just last week on one aspect of the topic: http://tomalrichblog.blogspot.com/2023/08/british-cuisine-de...

FTA:

> I assumed that device manufacturers update the software in their device about every month...he said they do it annually.

Those devices are at least _getting_ updates - there is a long tail of devices whose operational lifecycle [far] exceeds the vendor's support timeframe - in other words, they don't get patches at all N months after release.

The solution to these problems is straightforward - we've been managing it in software for a long time. EOL OSes, Long Term Support (LTS) OS releases, etc - but the device manufacturers are not as mature, and have not been making natural progress to do so.

And since this is HN - there is a startup hidden in the midst of all of this: an enterprise-grade IoT OS that "does security right." Sell to the device manufacturers, allow them to market it as "enterprise-ready" or some such. If the FCC guidelines here are approved, there will be a suddenly increased demand!


👤 w10-1
Neither regulation nor market mechanisms can really address the problem of device security:

There's virtually no overlap between the transactions of (1) purchasing the device for use, and (2) maintaining the device for security.

All the transaction features differ: different parties, different interests, different types of transactions, different risks. That's a recipe for exporting costs that no market mechanism or regulatory scheme can fix.

The best way to handle these is to have official succession plans: the manufacturer needs to delegate to a support organization for every product, and every product in use needs a way to indicate if it is up-to-date with support. The FCC maintains the database of support organizations for every FCC-certified device.

Everything can flow from that, from current to future legal and market contexts.

- Support organizations can take on devices. (Using open-source would be a particularly effective approach.)

- Ordinary negligence can attach to users without support or support organizations who fail to address a risk they know of.

- Support can be separately regulated

- Because support organizations (like insurance companies) are taking on the risk, they will discipline manufacturers, raising the cost of producing unsupportable devices.

- Effective manufacturers might elect to internalize support (leveraging confidential information) or focus on manufacturing per design.

- Support organizations may start contracting manufacturers by design, to reduce overall costs considering the entire lifecycle.

Politically, I believe manufacturers seeking to avoid regulation would accept regulation if they have the alternative of offloading it to support organizations. Those organizations would welcome regulation as part of their moat. Large device users would welcome support organizations who can supply the service they need, and support can extend their expertise into consumer markets. Cost/price and the payer would track the value and cover the entire lifecycle.


👤 jedberg
What is the appetite for requiring the firmware/software source to go into escrow, such that if the company goes out of business or stops supporting the hardware, the software becomes open source and public domain?

This would incentivize companies to provide updates but also allow the community to take over if the company folds.


👤 winstonprivacy
Sorry, but this is a terrible idea that is going to stifle innovation and make it much harder for startups and small companies to compete. The government simply doesn't need to get involved in this. There is already an incredibly robust ecosystem in place which shames manufacturers who drop the ball when it comes to security.

More government is rarely the answer and especially so in this case.


👤 tschellenbach
I think the regulation should classify products differently. Something that records audio and video should require a high level of security and support. Something that wirelessly controls my sprinklers maybe not as much.

👤 _whiteCaps_
This is off-topic but while you're here, the amateur radio regulations regarding baud / symbol rate really need to be removed and replaced with a 2.8kHz bandwidth limit.

See Section 97.307(f).


👤 Jiro
What's to prevent companies from saying "our profit is more important" and not putting on the label?

👤 earthboundkid
This is a great idea! Just last week my wife had to connect our oven to the wifi to use an app to control it, and I was thinking about how there's basically no guarantee the oven won't just become a botnet on our internal network eventually. It would be great to get some legal requirements on this stuff.

👤 6510
I've always thought that when a product is abandoned it should be left in a usable state. Keys or firmware to open up the device could be stored with the proper authority.

👤 baby_wipe
Please do not propose this regulation. If consumers actually cared about their IoT devices receiving security updates, companies would be doing it. The fact that companies are not already doing this is evidence it's not important to consumers. People may express frustration, but their purchasing behavior speaks louder than their words.

This regulation would force companies to work on things that customers don't actually value. It will hinder innovation. Companies could work on features consumers value instead of working on security updates that consumers do not value.

If this regulation passes, companies will be less likely to offer new IoT devices knowing they will have to provide security updates beyond what consumers are demanding.

This regulation will also increase costs for IoT devices. As a consumer, I do not want the FCC mandating what features will be included in my IoT devices.

From the perspective of an individual engineer, tech regulation like this often leads to engineers doing soul-sucking work that nobody cares about. I know your focus is on consumer protection, not producers, so that point may be irrelevant.

Please do not be the individual that causes a negative impact on the world, despite whatever good intentions you may have.

I'm guessing if the FCC enacts this regulation, it will help you in your political career. However, if you were to take the opposite stance and oppose the legislation for the reasons stated above, I'm sure you'd lose your job very quickly. Therefore, I am confident I will be ignored.


👤 CodeWriter23
With all due respect, I think the market should address this. Like UL Approval from Underwriters Laboratories, players in the market can submit their products to an organization that vets their security and sets standards about updates, product lifetimes, security incident response time commitments, etc., to obtain its seal of approval. Perhaps the seal has grade levels to indicate the vendor's commitment to security.

This simultaneously enables innovation by small players while providing a pathway for bigger players to put a meaningful trust signal on their packaging and advertising.


👤 ilc
Maybe we need to approach it differently?

There will always be an "End of Life" date. And there will always be a user using the product beyond it.

So my question is: How do we make it safe?

My first thought is a "deadman's switch". If a device doesn't get or see some form of signal, it just stops updating and disables IoT features. If the user wishes it to come alive again, there's a button they can press to have it "Check for updates"; if there are none, it tells the user it is at "End of Software Updates" and that "Certain services will be disabled," etc.

We can't stop the issue. We can decide what to do once a vendor decides to stop updating... and make sure that final revision is as safe as it can be. Preferably non-networked (including bluetooth etc).
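
To make the dead man's switch idea above concrete, here is a minimal Python sketch of the check a device might run at boot. The thresholds, function names, and messages are hypothetical, not from any FCC proposal or existing standard:

    import time

    # Hypothetical policy: how long the device may go without seeing a signed
    # "support is still active" beacon before connected features are disabled.
    MAX_SILENCE_SECONDS = 180 * 24 * 3600  # roughly six months

    def support_still_active(last_beacon_ts: float, now: float | None = None) -> bool:
        """Return True while the vendor's support beacon is recent enough."""
        now = time.time() if now is None else now
        return (now - last_beacon_ts) < MAX_SILENCE_SECONDS

    def on_boot(last_beacon_ts: float, user_pressed_check_button: bool) -> str:
        if support_still_active(last_beacon_ts):
            return "online features enabled"
        if user_pressed_check_button:
            # One explicit "Check for updates"; if nothing is found, stay locked down.
            return "End of Software Updates reached. Certain services will be disabled."
        return "network, Bluetooth, and cloud features disabled"

The key property is that the device fails closed: once the vendor stops signaling support, the connected features switch off instead of lingering unpatched on the network.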


👤 SkyMarshal
There’s a concept in the Linux ecosystem called Long Term Support (LTS) in which a Linux distro will label a new release as LTS which means they will support it with security updates for several years. IoT should probably have something similar.

👤 wolverine876
I see the strong encouragement to file comments in the FCC's public comment system, and you say it is highly influential - more influential than what you can do as a Commissioner.

I'm a well-informed, active citizen, and I didn't realize that. It might help to explain how that works - how do public comments influence things? My concern would have been that it depends on the FCC, and that comments could be included or ignored as desired, and probably the latter. (I want to emphasize: That would have been my concern had I not read this thread - I trust they are influential).

Thank you!


👤 pylua
IOT security is funny. Secure ways of deploying things exist today, but retrofitting existing systems would be impossible if the devices don't support it? Also, updating a running system… seems like a very costly and time-consuming exercise.

👤 natch
I went to an EFF event where there was a guy from Sling Media (Slingbox, remember them?) who said the one point that overrides all others in importance, which he learned from EFF, is that updates should never be force pushed.

Designing a device to accept force pushed updates opens non-addressable security holes by giving a mechanism that will allow political players, acquiring companies, or pretty much anyone with an angle, to use the legal system to exert any control and conduct any abuse they can get away with.


👤 blinkysc
I think we also need to have mandatory access to devices for our own monitoring solutions.

👤 slt2021
Data collection is another point - customers need to know if their personal data and video streams are being collected and stored somewhere.

Like video camera streams, voice audio, images, etc. - are they being used to train AI models for object recognition of some sort?


👤 alexfromapex
This is a great idea. The Philips Hue hub situation is a great case study that many are probably familiar with where the support ended for the first version of the IoT hub much sooner than many consumers were expecting. It's like a more acute and malicious form of planned obsolescence.

👤 indymike
I'd be really happy with products having to be labeled with:

1. Final date the manufacturer will provide firmware updates & security updates.

2. If the manufacturer will support open source alternative firmware & security updates.

3. If there are any subscription fees (and how much) to get firmware updates & security updates.

Issue #1 is a big deal: I've purchased equipment, new in the box, after the mfg had discontinued support. I've also run into issues with devices where updates required a subscription. I'd love a little disclosure.


👤 tptacek
Hey! Welcome back, Marco (he was a pretty active HN'er back in the day).

👤 blobbers
Tbh, I think this will add unnecessary regulatory burden for start-up companies.

Take for example general IoT cloud-connected equipment: it will get security upgrades more often than equipment that is completely offline. That was a big selling point of the Meraki cloud offering that is now part of Cisco. There would otherwise be millions of pieces of unpatched networking equipment, but Meraki could push upgrades onto networks without the customers managing the patching. The result being that they would have secure products, while the non-cloud providers would have to rely on their customers to do the updating. The Meraki solution was by definition a better upgrade path than the standalone solution. Why would you burden them with regulatory hoops that non-cloud devices do not require?

When you buy an IoT device, you're taking on a risk on anything not open source, and betting that a product/company will succeed. For example, I bought some cube sensors to monitor my home air quality. These sensors are now garbage because they connect to a closed cloud that has been shut down. They don't even function let alone get security upgrades.

If anything, non-cloud connected IoT devices are more likely to cause problems.


👤 freedude
Updates and the updating process are fraught with problems. Adding a requirement for updates to occur can compound those problems.

Some updates deprecate code. If that code is needed for critical functionality the update renders the device useless for that application.

Updates require some level of connectivity. It is becoming increasingly more common to insulate an IoT device from the Internet. Either through a VPN, an internal firewalled network, or completely disconnected. This is at odds with an IoT "requirement" to update equipment.

Updates require downtime. When do updates occur? How often? Can they be scheduled? If I have several IoT devices from the same manufacturer can I aggregate updates with an intermediate software package which then controls deployment?

How are updates tested? Are the original requirements from the initial manufacturing process kept, or do they change over time and result in a broken IoT device?


👤 outwit
A regulation that would force companies to release source code in case they go out of business. A form of escrow maybe? I have several paperweights that used to be "smart". And several others that still work but will never get another update.

👤 Plasmoid
This boils down to the Right to Repair and Maintain.

I always advise my friends and relatives NOT to buy smart appliances. The doom scenario is buying a $20k furnace that becomes useless. Imagine a scenario where you need an app (that's never updated) to adjust your house temperature. Or requiring people to run insecure wireless protocols to control it.

Appliances like this need to operate over an open control protocol, where the smart/internet bit is physically replaceable. Dropping $200 every 5-7 years for a new furnace smart attachment is a reasonable price; dropping $20k is not.


👤 blinkysc
I think it should be mandatory that we have access to these devices with SSH or something, to be able to retrieve logs or possibly create fixes ourselves.

👤 takinola
It seems to me that a law based on technical specifications is going to be hard to define and harder still to enforce. Perhaps, it may be easier to employ market forces to incentivize the manufacturers to secure (and continue to secure) their devices.

For instance, could there be a remediation fund that manufacturers pay into to compensate/support users for privacy or security breaches involving their devices? The amount they pay would be based on the number of devices involved and the severity of the issue (PII, loss of amenity, network nuisance, etc). The rate paid could also be reviewed periodically to ensure the amount is not too low or too high. This way, companies can put a literal price on security and factor it into their product strategy and planning.

The advantage of this approach is two-fold. It internalizes the cost of security so it is no longer an externality foisted on consumers. It also allows companies to apply their innovation towards addressing the issue without hamstringing them into a regulation-prescribed approach that may become outdated over time.


👤 sezycei
It is extremely disheartening to see one of our many bureaucrats come to an anonymous international forum for input on an American domestic policy issue, an issue that should be taken care of exclusively by the elected legislature rather than an unelected bureaucracy.

I hope that all of the comments in this thread are fully discarded by all who hold actual power at the FCC; the opinions of the international community are not relevant.


👤 nanolith
To add to previous similar comments, I think that one of the best ways to ensure that security updates are provided is to ensure that manufacturers either commit to continuous security updates, or after a minimum sunset period during which they provide security updates (e.g. 5 years), they agree to provide source code as well as build and deployment instructions, so that the community can take over. It must be possible to build the source code using a freely available toolchain. Furthermore, they must agree to provide links to these communities through their support pages for these products, so that users can be made aware of new third party firmware.

A durable IoT device could last decades, but few companies building these products will survive as long as the devices, let alone support a device they are no longer profiting from. As long as they are supporting the device with security updates, it's fine for the firmware to be proprietary. But, when they decide to cut support for the device, they should be willing to ensure that consumers who have purchased this hardware and are still using it won't become victims, and that the overall Internet community won't end up harboring botnets made of living dead ewaste.


👤 __sy__
Nathan -- thanks for your work on this issue. I'm the ceo/co-founder at Seam (YC S20). We're building an API for IoT devices. I have many, many thoughts for you.

For Seam, we purchase, set up, and test many individual devices and systems in our lab in San Francisco. During the course of this work, we discover quite a few interesting things. When possible, we work directly with manufacturers on addressing the more concerning problems we find. We maintain an internal device database (partially available here https://www.seam.co/supported-devices-and-systems) where we keep track of our findings on devices we test & integrate. One area that I haven't seen addressed here is data-storage jurisdiction. imho, that might be one of the more concerning aspects.

happy to have a chat; my seam email is in my hn profile.


👤 smokel
Interesting turn of events to ask Hacker News for their opinion :)

Someone I know wrote an interesting opinion piece on this [1], which might be relevant to this discussion.

[1] https://bits-chips.nl/artikel/iot-we-need-to-get-a-grip-on-t...


👤 ghastmaster
This is what civil courts are for. Otherwise let the market determine which manufacturers or products are reliable. This is nonsense.

👤 mgulick
Computer security is hard, and I think a "security label" would give a false sense of safety. Requiring manufacturers to respond to critical security vulnerabilities for a given period of time sounds like a good idea, but such rules often have unintended side-effects (like impacting startups, who may not be able to afford the certification or guarantee long-term support). What we really need is local-only device access, so that I can firewall a device off completely from the internet and still make full use of it with a local controller like home assistant. Locking down devices with the threat of DMCA violations against reverse-engineers actively reduces device security, and takes away my ability to fix devices myself.

This overall strikes me as much lower priority than the currently ongoing ATSC 3.0 DRM doom. Please please please do something about this nightmare that broadcasters are imposing on the public. Don't let broadcasters take away my ability to watch live TV without an internet connection (resulting in a complete emergency broadcast system failure?). Don't let broadcasters take away my ability to record/time-shift live TV using software-based DVRs (e.g. Plex, Jellyfin), which could never possibly meet the "Nextgen TV" certification requirements!


👤 IAmPym
I can't see this changing much if they water it down to a point where liability isn't an issue. If a company is weighing whether to put resources into a security update versus rolling the dice on being used as an attack vector, getting sued, and simply declaring bankruptcy, I don't think it would be effective.

Doubly so if their products are still in use after the company goes under. Now who is liable? What protections can one create for this scenario?

Most consumers will never file a lawsuit against such behavior because it is an incredibly uphill battle.

I get the sense that regulation like this will just create a new goalpost that won't ultimately help consumers, we've seen it happen time and time again, but I don't have a better idea. I suspect if you tried to enact real change you'd get too much opposition.

Tricky situation.


👤 unintendedcons
The most important thing is what happens after vendors fail and disappear - the code and keys and reproducible build system must be released so people can fix their things.

A standard and an org for keeping those things in escrow, please!


👤 iandanforth
I think the most valuable security feature for IoT devices is being able to work without contact with a central service.

If the value of a device is tied to opening a connection to and occasionally retrieving code from a third party it is inherently insecure. All I have to do is buy the company that owns the central server (or compromise it in some other less visible way) and I now have the ability to introduce malicious code to all devices that are receiving 'security updates.' You won't be able to make a rule to prevent asset transfer (correct me if I'm wrong) so you won't be able to close this hole. And this assumes the manufacturer isn't malicious in the first place.

For people to be able to protect themselves and to protect the value of the property they have purchased (e.g. the company tanks and the central service is lost) a rule should exist mandating minimum useful functionality in a disconnected and/or self-managed environment.


👤 ryukoposting
As a firmware engineer, I'm one of the people who actually writes the code that goes inside the IoT devices. I'm very interested in what the FCC might be able to do here.

How does the FCC define a security flaw? Would updates only be distributed when there is a flaw that needs fixing?

Remote update mechanisms can themselves present security problems in some domains. Thus, some devices should only be updatable if the owner has physical access to the device. Will the manufacturer be liable for damages caused by attacks on vulnerable devices that were not sufficiently updated by their owners?

IoT is making its way into defense and enterprise environments where reliability is a matter of national security. An update nearly always results in some downtime for the device, even if it's just a couple seconds. Sometimes, it may be in the best interest of a device's owner to defer an update indefinitely, until that device's continuous operation is no longer mission-critical. Even if the owner can't control exactly what is in an update, they absolutely MUST be able to control when an update occurs.


👤 avsteele
I think this is a bad idea which will lead to fewer and more expensive devices. I do not want the FCC regulating this.

It is much more reasonable to have the market impose some discipline on manufacturers and their level of support. Plenty of consumers would favor less costly, less-supported devices.


👤 projectazorian
Although I don’t have much to add on the specific topic here, I wanted to applaud you for coming to this community for consultation. If we want to see better regulation of our industry, this is exactly the sort of thing we need to see more of. (As opposed to dusty formal public comment processes easily gamed by rent-seekers.)

👤 b20000
I don’t understand how cybersecurity falls within the responsibility of the FCC. I thought the FCC dealt with protecting the radio frequency spectrum.

👤 b20000
Are you familiar with the saying that if you make something foolproof, nature will create a better fool?

👤 b20000
How is a small business going to be able to pay for this? This will be something that big tech can do but not startups that are bootstrapped.

👤 delfinom
Here's a thought. Your proposal is really to create an FCC Cybersecurity label, but why does it matter?

For a long time in the US, since the 1960s, we had the private-industry UL and the trademarked seal they enforced. The problem is that, over time, the industry created ETL/Intertek. They claim to be another standards testing body like UL, but their actual standards are quite loose and their verification looser. Now in 2023? Nobody cares anymore. You have Amazon selling literal fire hazards as electrical equipment that no brick-and-mortar retailer in their right mind would sell.


👤 fuddle
Marco Peraza: not related to regulation of IoT security updates, but since this is an AMA, how did you transition from software engineer to cybersecurity lawyer?

👤 meltyness
When are you going to rip out the Lazy Susan that manufacturers use to EoL equipment on an unbelievably aggressive timeline and force governments, agencies, and state universities to throw away perfectly good equipment that is no longer "supported"?

👤 geuis
Your solution is a sticker? Seriously?

It does absolutely nothing. It will simply be ignored. At most it will be an excuse for manufacturers to increase the price of a product simply because it has some "FCC APPROVED" sticker, while not costing any more to manufacture.


👤 jacquesm
- Isolate security updates from feature updates, and allow feature updates to be ignored without imperiling future security updates (a minimal manifest sketch follows this list)

- Require that products that are no longer supported with security updates have their firmware and build tool chains open sourced; even better would be a required escrow of the full build toolset + source for every version released to the public, with automatic release once certain conditions are met

- Require that manufacturers maintain documentation and build tool chains for up to a decade after the last item has left the factory

- Standardize update protocols
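
As a purely illustrative sketch of the first and last bullets (an assumed manifest format, not an existing standard), an update could carry an explicit type so a device keeps accepting security fixes while the owner declines feature changes:

    # Hypothetical update manifest: none of these fields come from an existing
    # standard; they only illustrate separating security from feature updates.
    manifest = {
        "product": "example-thermostat",
        "version": "2.4.1",
        "type": "security",           # "security" or "feature"
        "fixes": ["CVE-0000-00000"],  # placeholder identifier
        "sha256": "<hash of the update image>",
        "signature": "<vendor signature over this manifest>",
    }

    def should_install(manifest: dict, owner_allows_feature_updates: bool) -> bool:
        """Always take security updates; take feature updates only if the owner opted in."""
        if manifest["type"] == "security":
            return True
        return owner_allows_feature_updates

    print(should_install(manifest, owner_allows_feature_updates=False))  # True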


👤 mey
This may have some overlap/coordination with the CPSC and FTC laws regarding truth in advertising.

An ideal world would be one in which products marketed towards consumers have labeling on the box, up front, of a supported-until date, with a government definition of what that "support" means. Something like Google's Chromebook Auto Update Policy (except properly displayed/communicated on the box/device) would be a good start. Nothing prevents the manufacturer from extending that date later, but it puts a line in the sand for the consumer to make a choice up front.


👤 nneonneo
If I understand correctly, the labeling will be voluntary. So, I would guess that one of the challenges here is balancing the strictness of the requirements vs. the burden to manufacturers, i.e., it can’t be too hard to implement the commitments or nobody will label their products. What are other balancing considerations that you are having to consider? This could help figure out where we can “tip the scales” in comments.

👤 cannedbeets
The single biggest problem with IoT devices is the black box, vendor-specific cloud platform nature. This causes privacy issues galore as well as requiring every manufacturer to reinvent the wheel to secure their devices, while also making huge quantities of ewaste when Random Manufacturer #484 goes out of business, taking their cloud with them.

How about instead, mandating that all IoT devices need to comply with an open standard? Customers would be free to connect their device to Siri or Alexa if they wanted, but by default the device just works with an open standard that you can control fully, hosted at home if desired.

It would also remove the cloud security onus from the manufacturer—they would fund the standards org, which would be responsible for the security of the interface.

We already have this concept for electricity, phones, networking. You don’t buy a “MA Bell” phone anymore or an “Edison-compatible” fan.


👤 dtaht
This is a wonderful thread, and I am so glad to see so many sharing my nightmares and some of my conclusions. I encourage folk to work together on actually filing proposals with the FCC in their format. I would gladly join on such an effort, but am too busy to lead such an effort.

👤 TrueDuality
I strongly suspect that regulation at the IoT product level will have a very small practical impact, because I think it's largely targeting the wrong issue. The vast majority of the vulnerabilities aren't coming from the device manufacturer; many manufacturers are making relatively small changes to a reference design provided by a company like Broadcom (which is notorious for exactly the behavior I'm about to describe).

The reference design problem is an issue where a company like Broadcom creates a specialized chip. To use this chip they create a "reference driver" for it, package it in a custom firmware, then never update that reference software. I've worked building internet routers for homes and small businesses, and there are pieces of software we couldn't touch because they had been modified and only the fully compiled version was provided.

Broadcom passes the buck by calling it a reference design and washing their hands of it. Some upstreams do provide the source, but it's the complete source, not just the changes they made, and usually without any reference to which specific version they based their changes on. Trying to tease specific changes out of the Linux kernel's raw source code is quite the needle-in-a-haystack problem.

I'm not sure how a lot of device manufacturers _could_ handle this. They tend to have very small development teams that are more electrical engineers than software engineers, and usually their only directive is to make it work under extraordinarily tight deadlines. Maybe part of the answer is that they need to hire more people to be more responsible... But even with experienced developers, _every single hardware manufacturer_ is going to have to repeat the security fixes that companies like Broadcom refuse to make.

I don't even know where to begin proposing a legal foundation for reference design software. I do think that if the penalties and pain were strict enough at this level, it would lead to a different shortcut that would be much more beneficial to the world... If Broadcom and other companies practicing this kind of malicious apathy were forced to keep their reference designs up to date, my money would be on them stopping entirely and instead getting those drivers merged into the Linux kernel proper, where they can be properly maintained and updated by the legion of developers who care.

The act of getting that code into the kernel would force them to improve the code and not take the shortcuts that cause so many headaches, because the kernel developers gate the quality of the code they accept.


👤 af3d
One solution not being discussed much: formal verification. Any given "critical" IoT software component ideally should be subject to rigorous mathematical inspection in order to verify its logical integrity. Upon passing, it could then be signed with an FCC-issued key (along with the understanding that no unsigned software be installed during updates). While there are indeed plenty of theorem provers to be found in the wild, most are likely too domain-specific to solve the problem right out of the box. So the solution is hardly trivial, but it would nonetheless be a worthy (if daunting) undertaking.
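
Setting the formal-verification half aside, the signing half of this idea is routine cryptography today. A minimal sketch using the Python cryptography library (the key handling and calling context are illustrative assumptions, and ed25519 is chosen arbitrarily):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def firmware_is_authentic(image: bytes, signature: bytes, pubkey_raw: bytes) -> bool:
        """Check that the firmware image was signed by the holder of the trusted key."""
        try:
            Ed25519PublicKey.from_public_bytes(pubkey_raw).verify(signature, image)
            return True
        except InvalidSignature:
            return False

    # Illustrative use: a device would refuse to flash any image failing this check,
    # with pubkey_raw baked into ROM and (image, signature) taken from the update package.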

👤 askIoT
Been in the IoT space for over 10 years and this is much needed. In my experience, cellular-based IoT devices are inherently more secure than non-cellular devices due to the network security and the certifications they go through. Of course, there are exceptions to this rule. I would be interested to learn more about how the proposed regulations can avoid impacting the pace of innovation.

👤 arkitectual
I really appreciate you directly going to the community for feedback.

As someone who writes software for IoT devices and has worked in the past on security in the IoT space this is sorely needed. By far the biggest issue in my view is that manufacturers are not motivated to take device security seriously since they are largely isolated from any fallout. Device manufacturers already have to pass certification for RF emissions and safety among other things and should have to pass certification for at least a basic security audit on the device and the services the device connects to. Even self-certification would improve the current situation.

For many device types there exists some form of open source OTA update software or a commercial offering. In the last few years there has been significant maturing of the tooling in this space but the security aspect is often left as optional even though the tooling often makes it fairly easy to add. At this point I think the industry just needs a little push to make secure OTA updates the standard.


👤 omniglottal
Fundamentally, IoT security will increase in proportion to repairability and our ability to modify/reflash IoT devices. No other change will have as great an effect. If we (the people) cannot audit a thing, its obscurity will harbor vast troves of insecurities.

👤 notinthegovt
Great idea. Is my name and address, submitted on the form (Express or Standard), made public in any way? Potential political implications down the road…

👤 dang
All: To read the entire thread, you'll need to click More at the bottom of the page, or like this:

https://news.ycombinator.com/item?id=37392676&p=2


👤 1vuio0pswjnm7
Please consider requiring making source code available instead of "updates". If the vendor will no longer "support" a product, then the vendor should let the buyer support it themselves and release the source code. In practice, what we know can happen is that buyers may share their support solutions and actually support each other. The story of the Linksys WRT54G router is an example of what's possible.

If security is truly a concern, then all IoT devices connecting to the internet should be routed through a computer that the owner fully controls, which is likely to be running OpenWRT. Any tactics used by IoT vendors to evade traffic monitoring by the computer owner, e.g., discouraging self-signed TLS certificates, should be prohibited.


👤 toasted-subs
Would there be a portal where people can report potentially hacked IoT devices?

👤 1-6
The answer is: Standards. Linux core with LTS (Long-term support). Offer that option and don't get involved for the rest. Let the consumer decide.

👤 Ericson2314
Aim to converge with the state-level right-to-repair down the road.

If the manufacturer stops releasing updates, they need to make it clear how you can update the device yourself without compromising security.


👤 khiqxj
Here's what needs to happen:

The tech industry is not ready for pervasive internet enabled devices that have microphones, cameras, or control heavy machinery. It all needs to be taken out. Aside from the threat to human life (due to malfunctioning vehicle software), we're heading straight for a dystopia where you can get arrested for walking down the street and committing a thought crime because every house will have cameras facing the street hooked into some company like Amazon that will simply be commandeered by the government to "fight crime because if you aren't giving us access to your camera you aren't against crime".

I don't know what legal movements this needs; it has to be something that doesn't backfire. The obvious thing to do is just ban Amazon from selling products like Ring, and remove software and radio communication from home appliances and vehicles. Software in a television shouldn't be legal. It has environmental consequences too, not just privacy (which right now is a problem since every TV just scans what you're watching and reports back to the company).


👤 vkaku
I've dealt with this multiple times, so let me give my perspective.

- It is hard for manufacturers to do this with small teams, mostly because they do not always have good CI/CD or platforms available to stay on top of vulnerabilities and so on and so forth.

- Not all manufacturers write their own software and often contract it out to other experts in the field. This includes firmware and app developers.

- If a manufacturer goes out of business or their website is hacked or whatever, the devices are going to send information to someone else; this is a big risk.

- A lot of blast damage can be contained if home devices use local / mDNS-based service discovery as opposed to Internet-based services (see the sketch after this list). Many services could then either choose to reply locally or sometimes relay to the Internet if user policies allow. Unless people want other people unlocking their doors through the Internet, and they explicitly say so, an Internet connection cannot be mandated.

- If a producer goes out of business, they should be forced to release a signed firmware that disables the key checking, and then publish their source code for any users who wish to build and maintain it themselves.

Some of these will not be practical to get manufacturers to agree on.
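
On the local/mDNS discovery point above, a minimal sketch using the python-zeroconf package (the service type, name, address, and properties are invented for illustration, and a recent zeroconf version is assumed):

    import socket
    from zeroconf import ServiceInfo, Zeroconf

    # Advertise a hypothetical device on the LAN so local controllers can find it
    # without any cloud rendezvous service.
    info = ServiceInfo(
        "_example-iot._tcp.local.",
        "thermostat-1234._example-iot._tcp.local.",
        addresses=[socket.inet_aton("192.168.1.42")],
        port=8443,
        properties={"api": "v1", "model": "thermo-x"},
    )

    zc = Zeroconf()
    zc.register_service(info)
    try:
        input("Advertising on the local network; press Enter to stop...")
    finally:
        zc.unregister_service(info)
        zc.close()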


👤 lrvick
I have personally found several IoT vulns in everything from Zoom devices to Japanese robot hotels, and I run a security consulting firm. Swooping in with my 2c.

Most of the time the engineers making these things -think- they are reasonably secure, but they tend to have little to no infosec experience and are moving too fast with no accountability.

Worse, even when there is some accountability, such as code review, the release engineer can introduce security problems at release time, either as a supply chain attack or through simple stupidity.

If I were making the rules, I would ramp up common sense supply chain accountability which would cause some of the most prevalent problems to be spotted early.

My wish list:

1. Require all source code be signed (git signatures or similar)

2. Require all source code reviews by peers be signed (minimum 1)

3. Require source code to compile deterministically

4. Require at least two individuals or entities to verify code signatures, compile the code, and confirm the hashes are identical (see the sketch after this list)

5. Require proprietary firmware products have an external security firm on retainer incrementally reviewing code (including dependencies!), as well as reproducing, and co-signing releases.

6. Require proprietary products use a source code escrow service that will make their code public the day support and security updates stop so the consumer community can patch for themselves

7. Require open source firmware products have a bug bounty program (potentially with government funding like the EU does)
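
A toy sketch of item 4 above: each independent builder hashes the artifact it produced, and the release proceeds only if the digests match. The file name is a placeholder, and the second digest would arrive out of band, signed by the other builder:

    import hashlib
    import sys

    def sha256_of(path: str) -> str:
        """Stream a file and return its SHA-256 hex digest."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    digest_a = sha256_of("firmware-release.bin")         # this builder's artifact
    digest_b = sys.argv[1] if len(sys.argv) > 1 else ""  # the other builder's digest

    if digest_a == digest_b:
        print("digests match: release may proceed")
    else:
        print("digest mismatch: do not sign or publish this release")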

Happy to chat about this sort of thing with anyone interested. Contact info in my bio.


👤 dtaht
This is not really relevant to the proposed rulemaking, but it is something that bugs me deeply and I would like to get off my chest. I would like to see a mandate that a red LED be wired in-band to every camera and microphone, on every device, so that if the sensor is powered up, the LED is too. This is what John Gilmore proposed in 2004, and what we adopted in the OLPC project, as a first step towards not being ubiquitously surveilled. It is low cost, low power, and easy to implement.

👤 karteum
You might be interested in this (in French, but you may try with google translate or other alternatives :) https://www.bortzmeyer.org/8240.html

It is a very interesting explanation of debates around https://www.rfc-editor.org/rfc/rfc8240.txt


👤 consumer451
This is likely outside of the scope of this proposal, but my red team brain sees IoT devices from China as a distributed Trojan Horse.

In a time of conflict with China, firmware updates will be sent which will create the largest DDOS botnet in history.

Our cheap IoT lightbulbs will take down major internet infrastructure.

I don’t know the solution to that problem, but it’s a problem. Isn’t it?


👤 ckwalsh
Is there any way to tie an expectation of long term security support with legal protection of the product against competitors/reverse engineers/other parties that manufacturers may not want looking too closely?

I’m not suggesting granting additional protections to manufacturers, but codify an expectation of “if you abandon it, other people can come in and potentially salvage it”


👤 herf
1. The platform doesn't exist to do basic things like authentication or push notifications, so people hack together expensive cloud services or do insecure things instead. There is no rulemaking that will help this; we just need a better and universal platform.

2. "Keeping things patched" works in a world with Apple-like margins and it simply does not in a low-volume, startup-oriented, competitive market. No rules pay for a company to spend $500k a year on a dev team when a product didn't make enough profit - developers cost a ton and make the prices much higher, so these products are not the ones that win in the market.

3. If open standards cannot satisfy #1, then we have to look for other corporate structures, like a "Microsoft + Intel" marriage where hardware can be sold for cheap but the software remains supported by third parties. We see some of this with cloud companies like Alexa, Apple, and Google Home, but it's not really healthy yet, because there are no incentives to do things on the LAN in a secure way, so we are just hiding the costs of servers in other ways.


👤 userbinator
I agree with the others about requiring the ability to modify the software on devices one owns, but the other major threat that you should take into account is the increasing use of "security" to justify corporate authoritarianism. In any effort to add regulation, let's not forget the very important principle on which the country was founded: freedom.

👤 Joel_Mckay
People need to admit one can't protect systems from the unknown. Thus, detection and incident handling is arguably more important.

If Cisco/Google/Amazon/Microsoft can't keep their systems clean, then you can be certain adversaries who feign ignorance will be much worse, regardless of the paperwork.

A certification program similar to EMC testing in labs would however be favorable, as it does not consolidate the attack surfaces. Additionally, it allows the test to evolve with emerging threat vectors.

If you think companies will let people poke around their security policies for paperwork stamps, then you are fooling yourselves.

Good luck, =)


👤 transpute
For visibility into Linux IoT firmware contents, you can upload the public firmware binary for any IoT device to the Microsoft binary analysis service. This free service is based on their acquisition of ReFirm Labs Binwalk Enterprise.

https://techcommunity.microsoft.com/t5/microsoft-defender-fo...

> Firmware analysis takes a binary firmware image that runs on an IoT device and conducts an automated analysis to identify potential security vulnerabilities and weaknesses. This analysis provides insights into the software inventory, weaknesses, and certificates of IoT devices without requiring an endpoint agent to be deployed.
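
For anyone who would rather run a similar scan locally, the open-source binwalk tool that the service is built on exposes a Python API. A rough sketch, assuming the binwalk module is installed and the classic 2.x API is still in place:

    import binwalk

    # Signature-scan a firmware image and print what binwalk believes is embedded
    # in it (filesystems, kernels, certificates, compressed blobs, and so on).
    for module in binwalk.scan("firmware.bin", signature=True, quiet=True):
        print(f"{module.name} results:")
        for result in module.results:
            print(f"  offset 0x{result.offset:08X}  {result.description}")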


👤 whats_a_quasar
A required support period of some number of years is problematic for products developed by startups, because startups cannot guarantee that they will still exist to provide support in several years. They can have the best of intentions and excellent engineering, but still fail in the market and be unable to keep maintaining a device. So requiring security support for several years wouldn't have any effect on these devices, because the company will be gone and there won't be anyone to take an enforcement action against.

For this situation you could require additional disclosure to consumers at the time of sale, along the lines of "We can't guarantee that we'll provide security updates." Or you could require open-sourcing of the code if a company goes defunct. Or perhaps device manufacturers could be required to hand the technology off to some third-party maintainer if they go under.

Any regulation at end-of-support-life has the "company disappears" problem, so probably disclosure at time of sale is the only way to handle it reliably.


👤 slicktux
What does this mean for DIY hardware? For one, I like my ESP32 and Arduino hardware because I can do whatever the heck I want with it. Will I be limited in choices if a bill/regulation is passed so as to make it restrictive to buy IoT hardware that's not compliant with "security features"?

👤 hsbauauvhabzb
I’ll preface this with the fact that I am a security engineer (‘penetration tester’) but I probably don’t live in your country.

I care more about my privacy than I do about the security of my device, but an architecture that supports the second almost always supports the first (my neighbours hacking my zigbee isn’t a threat model almost anyone should be concerned about, unless there’s a pattern of hacking en masse).

I found out my smart lights literally have a microphone in them the other day, under the guise of ‘plays light to your music’ or something.

I architect my IOT by implementing a network without internet, fronted by home assistant - that way my devices can't 'phone home' with who knows what privacy-infringing crap.

I know that botnets driven by IoT are a real and ongoing problem, and a problem that is probably not going to get much vendor buy-in without regulation, but what I'm pointing out is that it's not the only threat facing these devices.

I want:

- clear guides on what data is collected, I don’t trust they’d only use it the way they say they will so I wouldn’t bother reading that part, if it existed

- the ability to opt out of any data collection aside from anything required to do technical updates. I want any data transfer to occur in a clear and auditable way (e.g the ability to inject a root CA and perform a mitm if I wish)

- enough protocol spec that at least basic functions can work offline via a system like home assistant

These won’t directly solve the ddos vectors, but they will solve the problems that come shortly after on the timeline.


👤 scottcodie
Is there a definition of a "security update"? Software has an infinite number of bugs and it is cost infeasible to fix them all. If it's years down the road, the engineers that wrote the code may be long gone.

👤 simne
Greetings from Ukraine, a European country with a real, great war happening right now.

I must say, we see extreme growth of cyber-crime as part of modern war. I think that in the near future, any cold war is guaranteed to have a huge cyber-crime component.

And hacking of IoT devices is a very significant share of cyber-crime now. In a real war it is a question of life and death, because hacked devices with radio emissions are used by hostile intelligence to find targets for heavy-weapon attacks; we have also seen cyber attacks on electric-energy infrastructure intended to cause blackouts (fortunately for us, unsuccessful).

Chinese IoT devices are a very special part of the question: in many cases they are connected to Chinese clouds, and this is also extremely dangerous, not only because of potential unfriendly Chinese moves, but also because their security is not good enough, so in many cases cyber-criminals could intercept communications, interfere with the operation of a device, or even hijack control.

For example, there are smart door locks with cameras, and I hear hackers compromised some and used them to observe the work of air defense, so the enemy could tune their air attacks to do more harm.

In civilian life without war, videos from hacked door locks (or other IoT cameras) could be used for illegal surveillance, to coordinate riots, etc.


👤 convivialdingo
As a developer and a consumer, what I'd really like to see is:

- Manufacturer voluntary guarantee of 1/3/5 years security updates with an expiration date.

- Separation of functionality and security updates.

- The ability to "turn off" connectivity and retain full local functionality.

- An industry security certification like UL.

- A single point way of identifying and validating devices.

As it is, I avoid using IoT mostly for security reasons. Having worked in security for many years I have seen the best and the worst. Having security isn't a panacea either - it needs an ongoing management & reporting infrastructure.


👤 smingo
Regulating here is necessary, but the challenge is steep! IoT devices may include a complex bill of materials (BoM) including software (SBoM). Vulnerabilities can appear in any of those components.

On the one hand, CVE and vulnerability databases are excellent, and with some automation around vulnerability and patch availability there's the possibility of automated re-builds.

But the manifests can be huge. And some component could be vulnerable, but was never anticipated to be so, and perhaps doesn't even have the means to be patched. Update processes for sub-sub-components may not have been exercised, and could lead to bricked products.

So labelling and guarantees are welcomed. But the challenge is practically insurmountable, and until the entire industry steps up to meet it, labelling and guarantees are going to be 'best effort'.
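
To make the SBoM point concrete, here is a toy sketch that checks a CycloneDX-style component list against a local advisory table. Every name, version, and advisory below is invented; a real pipeline would query an actual vulnerability database such as the NVD or OSV:

    # Toy SBOM: every component and version here is invented for illustration.
    sbom = {
        "components": [
            {"name": "busybox", "version": "1.30.0"},
            {"name": "libexample-tls", "version": "0.9.2"},
        ]
    }

    # Pretend advisory feed: component name -> set of affected versions.
    advisories = {
        "libexample-tls": {"0.9.2"},
        "example-httpd": {"2.1"},
    }

    def vulnerable_components(sbom: dict, advisories: dict) -> list:
        hits = []
        for comp in sbom["components"]:
            if comp["version"] in advisories.get(comp["name"], set()):
                hits.append(comp["name"] + " " + comp["version"])
        return hits

    print(vulnerable_components(sbom, advisories))  # ['libexample-tls 0.9.2']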


👤 codedokode
I am not a US citizen, so I cannot have a say here, but I have another idea.

What's the problem if an IoT device is vulnerable? In the worst case the user will have to buy a new one (or pay someone to fix it). Is it a serious problem? I think not. Eventually users will understand which manufacturers are reliable and which are not.

You probably want to argue that infected devices can be used in DDoS attacks. But in that case, why not take measures against DDoS attacks directly and leave IoT devices alone?

Why, despite the Internet existing for 50 years, is there no protocol by which any host can demand that all upstream providers block traffic from specific IP addresses? This would make low-level (transport-level and below) DDoS attacks impossible.

Make such a standard and make it required for all top-level ISPs. In this case the malicious traffic can be stopped at the source network, or at least at the Tier-1 level. Middlemen like Cloudflare would become unnecessary, and you would be able to withstand a multi-gigabit DDoS attack even with just a $5 VPS.
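
No such host-invocable protocol exists today (operator-level mechanisms such as BGP Flowspec are the closest analogue), but here is a sketch of what one message might carry, purely to make the commenter's idea concrete; every field name is hypothetical:

    from dataclasses import dataclass, field

    # Entirely hypothetical message for the proposed "block it upstream" protocol.
    @dataclass
    class UpstreamBlockRequest:
        victim_prefix: str                                      # host under attack, e.g. "203.0.113.10/32"
        attacker_prefixes: list = field(default_factory=list)   # sources to drop
        expires_seconds: int = 3600                             # must age out to avoid permanent blackholes
        proof_of_ownership: bytes = b""                         # signature proving control of victim_prefix

    req = UpstreamBlockRequest(
        victim_prefix="203.0.113.10/32",
        attacker_prefixes=["198.51.100.0/24"],
    )
    print(req)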


👤 TheMagicHorsey
This is one of those areas where regulation might cause unintended consequences that are worse than the current state of affairs.

👤 SimingtonFCC
Thank you so much everyone for the interesting, high-quality discussion so far. My team and I are looking forward to continuing to engage with you for at least a few more hours.

Just a reminder: As fun as discussing this in here with you is, the best way to influence what the FCC ends up doing is to file an official comment by September 25th at https://www.fcc.gov/ecfs/search/docket-detail/23-239 . Click to file either an ‘express’ comment (type into a textbox) or a ‘standard’ comment (upload a PDF). The FCC is required to address your arguments when it issues its final rules. All options are on the table, so don’t hold back, but do make your arguments as clear as possible so even lawyers can understand them. If you have a qualification (line of work, special degree, years of experience, etc.) that would bolster the credibility of your official comment, be sure to mention that, but the only necessary qualification is being an interested member of the public.

Finally, I'd like to extend a special thanks to dang and the rest of the HN team for their help putting this together. They have been a pleasure to work with.


👤 pyuser583
This seems like a strange regulation. Companies would simply comply by doing pro forma updates for the sake of updating.

Additionally, the update process itself introduces security vulnerabilities.

An IoT device might have a lifespan of 20 years. Let's say a company is required to provide updates for 10 years. For the subsequent 10, that update process is nothing more than a vector for malware injection.

The most serious type of vulnerability is an unauthorized, unbounded, write operation.

One of the most secure architectures is "stateless," where the software is hardcoded into the hardware. This proposal would outlaw that approach. It's not for all situations, but it should be seriously considered.

The real solution is to hold companies accountable for vulnerabilities.

I suspect this is better done by the FTC.

Perhaps your time would be better spent fighting DRM on broadcast television, or ensuring cell towers aren’t tracking people, or that phone calls have crypto enabled by default.


👤 AtNightWeCode
Maybe it is the price model. It should state what type of security updates are included, for how long they are free, and what the expected cost after that is. I think a problem is that you buy a thing, not a thing and a service.

👤 fdschoeneman
I appreciate you for coming to this forum and soliciting input. There are a lot of smart people here and they're likely to have good opinions.

My own opinion, though, is that regulations like this should only be written when no reasonable alternative to them exists and as a last resort. And I think there is still plenty of room and time for industry to come up with its own certification programs, kind of like organic certifications in food. What actions have you taken or do you plan to take to encourage industry to take care of this without being forced to at the point of a gun?


👤 heisenbit
Way back it was not so uncommon that companies buying software had access to the sources. When they did not have access to the sources and the machine code was supplied by a small vendor there were escrow agreements for the source just in case the vendor went out of business or died. No matter what rules are put in place companies will find a way around them or will not be able to follow them at all.

Abandoned IoT devices, if there are few of them, are an issue for the consumer, and consumers should be enabled to at least help themselves or hire help.

Abandoned IoT devices, if they exist in larger numbers, can pose a threat to the network, and one may consider what could be done about those.

Abandoned IoT devices are also an economic burden, as they cause a premature loss of value of the investments of consumers and investors. Worse, there are way too many horror stories out there of critical infrastructure running on outdated platforms. These devices were bought at a time when computers were rare. If we extrapolate 15 years from now based on the experience we have today, we will see not just accumulated economic loss but disasters too.


👤 MayeulC
EU citizen here, but FCC regulations also apply to me indirectly :) Not sure if I am allowed to comment there.

I echo the many voices here that do not connect their devices to the Internet. Of course, I am in the minority that attaches a great deal of importance to these questions, so my devices either run free software (Tasmota and ESPHome are the two main ones) or have no Internet access (currently via ZigBee, though I have also used VLANs and separate Wi-Fi access points). The only service accessible from the outside (not firewalled) is HomeAssistant.

I can see manufacturers doing the same, with a direct line to their servers for controlling smart devices, but that's only as good as the manufacturer's cyber security practices; what if they are hacked?

Reducing internet exposure to "gateways" (HomeAssistant in my case) distributes the problem a bit, but it also reduces the number of devices that have to be kept up to date, so this may be an interesting avenue, especially if manufacturers are pushed to include a dedicated "IoT VLAN" and SSID in every (Wi-Fi) router, making it easy for every consumer to adopt a separate network.

To summarize, here are the main elements that could help, in my opinion:

* Ensure that every IoT device (often sensors and actuators) has basic functionality available even when completely offline (including during initial setup).

* Ideally these bits of interaction should take place over a standardized interface (the Matter standard seems to fit this perfectly).

* Maybe consider strongly incentivizing gateways instead of directly connecting IoT devices to the Internet, and hold these to higher security standards?


👤 eternityforest
My big concern is with devices being obsolete due to the cloud servers going down.

I am all for high security IoT, but only if the requirements don't make it harder to use local-first APIs, and don't add too much cost to these devices, or stop small companies from making open source devices.

In fact, I think I would even prefer that requirements do not apply at all unless the device communicates directly over the public internet, or over proprietary wireless networking, or where the device's failure could cause a safety hazard.

I would not want to see anything get in the way of, say, making a cheap motion sensor that connects to WiFi and allows open access via the already-private WiFi network, or a BLE sensor tag that broadcasts open data.

Not all devices need updates, or encrypted protocols.


👤 nfriedly
Here's a few things that I would find useful for IoT device labeling:

1) An indication of whether it supports local control, or requires an internet connection. (I can control my Philips Hue lights from my phone even when the internet is down, as long as they're on the same network. However, my MyQ Garage Door opener can only be controlled from my phone when both the phone and the garage door have an internet connection.)

The reason this relates to security is that devices supporting local control can be completely firewalled, significantly reducing the attack surface.

2) A commitment of how long the device will receive updates.

It should probably be in the form of an end date, or a number of years from the date of purchase, because "5 years of support" currently often means "5 years from when the product was first sold, which was 4 years ago, so you really only get 1 year of support if you buy today".

Separate dates for features and security fixes would be acceptable.

There should probably be some provision for vendors to extend this, since they would generally want to for devices that sell better. They should just be prevented from shortening it without penalties (e.g. a full refund offered to all purchasers.)

3) A clear indication of what happens after the above date passes - does the device continue working? become completely inoperable? lose some features? etc.

4) Some indication of openness, ideally in a way that encourages it. Maybe an "A+" rating requires that the vendor make available everything needed to compile and install a new firmware.

Some differentiation could be made between devices that are open at the time of purchase and devices where the vendor has made a commitment to open it up after the support period ends.


👤 ultra_nick
I'd like if all IOT devices had an offline mode where network access is shut off.

Most companies don't have the skills to make their devices secure. We should have the option to buy devices without network access. Devices that haven't been updated for a year should shut off their network access automatically.


👤 freeopinion
I have two points. First, remember that things like cell phones are also IoT. Laptops are IoT. So make sure that your regulations make sense for all IoT and make sure they apply to all IoT.

Second, I tend to be libertarian. I'd prefer less regulation, not more. So if your concerns are about security update timeframes, I like your idea to treat it more like "Best if used by" labeling. That is, if somebody wants to bake some bread without any preservatives that should be consumed within 24 hours, that's ok. You don't force any lifetime. You just have that lifetime clearly displayed on the packaging.

Some people would prefer bread that "expires" sooner than later. It could become a badge of honor. Likewise, a cell phone or mesh router that "expires" later than sooner could become a badge of honor. It could become a thing. It could be a feature that affects sales.

Keep the regulation as light as possible and be smart about it so that it can still accomplish your desire. Give the labels some standard presentation and prominence like Nutrition Facts. I think it is a great idea.


👤 nonrandomstring
Forgive my redundancy as I am surely saying what has already been said in other comments I've not yet read.

I have one cast-iron, non-negotiable relation with IoT objects:

No IoT device should ever hide what it is doing from me.

If it lives in my house, car or environment it is not acceptable for me to install something which then tries to hide its operations.

I don't care who it's made by or how much they think they are "helping me".

Whether by encrypted tunnelling, side channels, secure enclaves or whatever devious shitfuckery, if I can't put Wireshark on my network and put a name and purpose to every operation then I'll track down the origin and eliminate it.

Further, I am simply not interested in any "arguments" claiming this is "necessary". It is not. Profit, spying and control are always the ulterior motive, and they are unacceptable. Do not allow anyone to pretend they are "security" and hide behind that word. Security for whom, from whom and to what end....

Also there should be no "embedded function" which I cannot turn off with confidence, perhaps by a physical jumper or switch. No IoT device should ever reset itself to insecure defaults.

Thanks for enquiring and good luck in your difficult work.


👤 Sparkyte
I will need to read more about this over time. How does this affect FedRAMP-based stuff?

I think the FCC needs to step in on other things too. For one, I think digital scalping has gotten out of hand. Something needs to be done to protect users and consumers. I think we need regulation on this matter. People use bots to buy up tickets or products and resell them. The difficult part is how to enforce the rules and prevent scalping.


👤 blondie9x
What about requiring manufacturers to pay out for proactively identified exploits and then patch them, for the reasonable life of the equipment?

The reasonable life of the equipment should be based on the equipment type. It should also be clearly described and communicated to consumers how much support the manufacturer will provide for the product over its life and into the future.

Manufacturers should have to estimate how long a product can last and how long it will receive updates, based on their testing and reasonable conditions of use, and disclose what environment it was tested in.

Also, the recent Samsung leaks, where many compromised accounts were related to IoT devices, show that the security risk is not just at the device level but extends, in a broader sense, to the company itself.

How a company manages, archives, and safely deletes data is a paramount issue that flows into securing IoT devices better as well.


👤 exabrial
** THE REAL PROBLEM **

Is companies making shit that has no business connecting to the internet. _THAT_ needs to be regulated. Why does your car need an internet connection? Why does your fridge need an internet connection? Why does your robovac need an internet connection? ALL of these items could work just fine with zero internet connection, or a simple LAN connection.

If we could regulate that, we'd solve 95% of the problem and the IoT update problem goes away. An ounce of prevention is worth a terabit-scale DDoS of cure.


👤 yacine_
YOU ARE ON THE WRONG WEBSITE!!!

👤 wiremine
Thanks for advocating for these issues. They are important.

I'm the CTO of a small software studio that has worked almost exclusively in the IoT space for the last 8 years. We've worked with large Fortune 500 companies all the way down to startups. Half our projects have been for consumer IoT, the other half for B2B projects.

There are two core financial realities that regulators need to understand:

1. From a purely financial perspective, most manufacturers do not have financial models that make perpetual, ongoing updates possible. Without a recurring revenue stream tied directly to a device, any software update reduces the return on investment for a product. For example, say you make a consumer IoT device and sell it without a subscription. The BOM might be $50 and the NRE is $50. You might sell it for $200 and make $100 profit. However, without a revenue stream of some sort, that profit needs to somehow support all the software updates those devices will receive in the future. Say each update costs $5 to build and deploy, and you release once a year. After 4 years you've spent $20 on updates, likely more due to inflation. At some point this becomes a losing proposition (see the sketch after this list). The fact that GAAP says non-recurring engineering can be capitalized, but the maintenance cannot, creates even more issues. And given the nature of security, it's difficult, if not impossible, to guess upfront how many updates a device might require.

This problem is compounded by the poor software practices at most manufacturers. It is non-trivial to set up a software practice that keeps the cost of ongoing development in check. More model numbers and larger fleets increase the development and QA costs. The per-unit costs go down, but the absolute dollars go up.

2. The second issue is that IoT devices have a symbiotic relationship with other systems, and the financials for the overall product are tightly coupled. For example, consider cold chain monitoring systems. These devices are simple: measure the temperature and send the data to a cloud-based pipeline. Alerts are then forwarded to another system. The value is in the outcome: alerting users to problems. However, to be competitive, the manufacturer might sell the hardware at a loss. If the _system_ is unprofitable, the vendor might turn off the system. In this case, it's hard to demand that the vendor provide updates for a defunct system. In the worst case, companies will go out of business and all the regulation in the world won't really help the consumer. Or large companies will simply set up shell corporations to shield themselves.
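
To make the arithmetic in point 1 concrete, here is a rough back-of-the-envelope sketch in Python. The numbers are the same illustrative figures as above, not real data, and the flat $5-per-year update cost is of course a simplification:

    # Illustrative margin math only; all figures are hypothetical.
    BOM, NRE, PRICE, UPDATE_COST = 50, 50, 200, 5

    def profit_after(years_of_support: int) -> int:
        # One-time profit at sale, minus the cumulative cost of yearly updates.
        return (PRICE - BOM - NRE) - UPDATE_COST * years_of_support

    for years in (0, 4, 10, 20):
        print(f"{years:>2} years of updates -> ${profit_after(years)} profit per unit")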

Some in this thread suggested that all the software be open sourced. This is a fool's errand, IMHO. Forcing manufacturers to do this isn't viable because they often don't own the entire software stack: they have suppliers who own the IP, who in turn have their own suppliers. And it's really hard to draw the line in embedded systems. The BSP, RTOS/kernel and applications are all tightly coupled.

"But I bought the software!" you say. Yes, you did. You bought a binary snapshot in time of a software system. Unless you're paying for perpetual updates, you didn't buy unlimited free updates into perpetuity. And you definitely didn't buy source code.

So, what's the solution? I don't have any silver bullets, but here are some thoughts:

1. For many devices, allowing the user to replace the microcontroller outright is the ultimate consumer safeguard while still protecting IP. If the vendor doesn't provide updates, or goes out of business, the owner can replace the MCU or SoC, and the IP of the manufacturer's supply chain stays protected. This requires a clean separation of concerns, and it would also allow competitors into the market. But this is the best actual safeguard for consumer protection while respecting IP. There are a lot of ramifications to this. The user instantly voids the warranty, and would need to take responsibility for the security and safety of the system. But for a lot of use cases, this is fine.

2. Incentivizing a secondary market to encourage third parties to take over failed systems. For example, if a vendor winds down a failed IoT project, give them a tax break to sell the system to a third party, or open source it. This would give vendors a lot more reason to try to own/license all the IP to make it open-sourceable. There _are_ knock-on effects: the third party might charge for updates, or raise prices.

3. Incentivizing common hardware/software platforms. Here's the reality: most manufacturers don't really want to write their own software. They want to sell hardware: it's what they're good at. Encourage more generalized software architectures for IoT devices. This will reduce the costs and enable parts of the system to be updated, without requiring access to the entire system.


👤 hot_gril
What qualifies as a device having security support? That known vulnerabilities reported to the FCC are patched within a reasonable amount of time?

👤 heroprotagonist
Can we actually trust the FCC comment process now? It's been astroturfed by interest groups for well over a decade, and that's only getting worse with AI-generated content.

👤 hot_gril
I'd like some optional qualification guaranteeing support similar to what autos have. There are recalls for safety issues, refunds for negligence, etc for a reasonable amount of time. It's fine if not all products meet this bar, but it'd be good for highly-committed IoT companies to be able to distinguish themselves from the cheapos.

I don't feel the same way about other computer hardware/software because the risk is far less physical in most cases. IoT has the unique ability to cause real damage.


👤 worthless443
Such a transparent and clear post, so first of all: thank you! Based on my experience with proprietary IoT devices and protocols, vendors seem to be somewhat unwary when it comes to potential security vulnerabilities and exploits in their protocols or device firmware. As I understand it, it's now on us consumers to deliberately report insights to regulatory authorities and the respective lawyers and hope everything comes together.

👤 elihu
Some ideas:

Customers should be able to return for a full refund any products that have security vulnerabilities that aren't addressed within the support period.

Companies could opt to participate in a source code escrow program where the source code for the product is deposited with a third party, and if the company goes out of business or something, the source code is released with a sufficiently-permissive license that a sufficiently-motivated user community can fix bugs themselves and distribute them (but not necessarily use the code in other unrelated/competing products unless the company is okay with that).

Companies should be required to disclose up-front any classes of vulnerability that they don't consider to be a security flaw. (E.g. a software product probably wouldn't be secure when run in an operating system that has been compromised by a malicious actor, or a network security product might not be secure against an attacker with physical access.)

Just as a matter of terminology, I think it would be appropriate to refer to software security patches as product recalls, because that's effectively what they are.

In the long run, I'd like to see a system where organizations could run something like a combination compilation/notary service. For instance, you have a server somewhere that people or companies can submit code to, and the server compiles the software and issues a digital signature for the compiled binary attesting that it compiled with no errors or warnings, and their linter couldn't find any problems. For something like C++ this might not be very interesting, but languages with stronger type guarantees might provide some confidence the program is at least not doing something that's nonsense. (Whether it's correct is a different problem from whether it's at least using memory and concurrency primitives in a sane way.) Someone might upload their code as safe Rust or Haskell or Agda or whatever, and the service could say "yeah, we're pretty sure this is memory safe and doesn't exercise undefined behavior." Companies could seek certificates from whoever the most respected compilation services are at the moment.
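
As a rough illustration of what such a compile-and-attest service might look like, here is a minimal Python sketch. It assumes a pinned toolchain (gcc here), an Ed25519 attestation key held by the service, and the `cryptography` package; all names are hypothetical:

    # Sketch of a compile-and-attest notary: build the submitted source with
    # warnings treated as errors, then sign a hash of the resulting binary.
    import hashlib
    import subprocess
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    SIGNING_KEY = Ed25519PrivateKey.generate()  # in practice, a protected long-lived key

    def compile_and_attest(source_path: str, output_path: str) -> bytes:
        result = subprocess.run(
            ["gcc", "-Wall", "-Werror", "-o", output_path, source_path],
            capture_output=True, text=True,
        )
        if result.returncode != 0:
            raise RuntimeError("refusing to attest, build failed:\n" + result.stderr)
        with open(output_path, "rb") as f:
            digest = hashlib.sha256(f.read()).digest()
        # The signature binds the attestation to this exact binary.
        return SIGNING_KEY.sign(digest)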


👤 WalterBright
The software industry in the US is enormously productive. It's the crown jewel of the American economy, and frankly props up our economy.

And it's completely unregulated.

Please, let's resist the siren song of regulation and its inevitable unintended and undesirable side effects.


👤 blubbity
Whatever the regulation, please please please may it only apply to companies above a certain revenue (or some other metric). Regulation of this form has a dampening effect on innovation, and this would be a straw on the camel's back for startups.

👤 pledess
I'd prefer a different solution in which the ecosystem of off-the-shelf consumer-grade routers and behind-the-router IoT devices cooperates to block Internet access, for a single IoT device, if that device is in a suitably exploitable state.

This isn't a perfect solution, but your "support their devices with security updates for a reasonable amount of time" is a non-starter. For example, suppose I'm designing a doll for the Christmas 2024 season. The doll uses the Internet because it's an AI product that converses with young children about the latest STEM news. I don't know how long it'll be used: maybe my eight-year-old daughter will just find it boring, or maybe she'll physically destroy the doll because she disagrees with its opinion on the Riemann hypothesis.

I can't afford to maintain firmware beyond January 2025. If I have to commit, I'll just never release the product, and children will potentially have worse learning outcomes forever. But I am willing to have my 1.0 firmware send beacon frames to cooperating routers, announcing that my combination of product ID and patch level is a8217a61-09de-4b1e-8a99-b6fbc180cdce, and please blackhole me if this is a dangerous version. This requires more engineering to work effectively, but please don't stifle innovation by small IoT vendors who cannot commit firmware-maintenance resources to a product with an unknown revenue stream.
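
To illustrate, here is a small Python sketch of the router-side half of that idea, assuming a published feed of known-vulnerable product/patch identifiers (the feed URL and beacon format are invented for this example):

    # Router-side check: blackhole a device if its self-reported
    # product-and-patch identifier appears on a vulnerable-version feed.
    import json
    import urllib.request

    BLOCKLIST_FEED = "https://example.org/iot-vulnerable-versions.json"  # hypothetical

    def load_blocklist() -> set:
        with urllib.request.urlopen(BLOCKLIST_FEED) as resp:
            return set(json.load(resp))

    def should_blackhole(beacon_id: str, blocklist: set) -> bool:
        # beacon_id is the device's identifier from its beacon frames,
        # e.g. "a8217a61-09de-4b1e-8a99-b6fbc180cdce" as in the comment above.
        return beacon_id in blocklist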


👤 Charon77
Not a US citizen, so I'm not sure if I can even choose a state, so here it is.

IoT devices are almost always data collection devices: CCTVs, thermostat sensors, smart light switches. These devices exist not only in the comfort of American homes, but also in American industries and American office buildings. And unlike your computer browser, which receives security updates every 2 weeks or so, IoT devices are often 5+ years out of date and susceptible to hacks discovered over the years. Hacks can be discovered by malicious individuals and distributed everywhere over the course of hours, the so-called zero-day attacks. Think ransomware, cryptominers, backdoors. What would have happened over the course of years? And unlike computers, where the vulnerable part is the software, IoT devices are typically low level, where a security compromise over the network can compromise the entire hardware, from exposing secret data locked only by software (such as leaking private keys) to allowing attackers to install custom software on the cameras in your home.

And worse, there's little incentive for manufacturers to continue supporting devices 5+ years into service, even though they are still in use. They may discontinue that line of products and sell new products instead, or, even worse, go out of business.

We have been putting too much trust in the industry, but we need security to protect those who are vulnerable. Do you know when the camera in your dining room last got a security update?

Manufacturers should be liable for ensuring the informational safety of their equipment. They need to specify on the device how long security updates are guaranteed, and beyond that, consumers are responsible for the device, either by using third-party security updates at their own risk, or by getting newer models with newer security guarantees.

So the calls to action would be:

1. A limited period of mandatory security updates that is communicated on the device.

2. Allowing third parties to make changes to the device, especially after the security period is over.

One consideration is, of course, open-sourcing the software. The internet is really quick to spot security issues and even propose fixes, and this would come at little financial cost to manufacturers that have no incentive to test the security of their devices.


👤 kevin_thibedeau
If you're going to mandate support for firmware updates on just a subset of products, you may as well do it for all of Part 15. Companies will find a way to game the system to shirk their responsibility if there are any loopholes based on product classification.

The FCC could also stand to step up seizures of imports that lack proper EMI shielding and have other compliance issues. Nobody seems to care to address those violations, so why should IoT be singled out? Want to boost electronics production in the Americas to gain independence from Southeast Asia? Stop forcing companies obeying the law to compete against lawbreakers.


👤 Khelavaster
Echoing other comments that forced security updates feel like compelled speech.

It's reasonable to expect a company to uphold any contracts it enters into. It's reasonable to require companies to disclose their update policies and uphold those policies.

It's not reasonable for companies to be compelled to provide security updates for their IoT devices.


👤 paulwilsondev
Less government and more wild west would be appreciated.

👤 behringer
It would be better if the FCC demanded open source and/or open hardware. For example, make GPIO pins required, no undocumented binary blobs, and service manuals available.

A manufacturer shouldn't be stuck supporting hardware they don't want to, but they absolutely should give us the ability to repair and manage our own devices.


👤 diogenes4
Is there any way to give people a way to opt out of unnecessary internet connectivity? It seems to be spreading to functionality that shouldn't require internet connectivity to begin with, like basic data storage and controls on a local network.

👤 AnCapAndrew
Stay away with your regulatory capture garbage. Software is free speech you have no power here.

👤 AbrahamParangi
In an ideal world, IoT and smart device manufacturers would be required to pay for “end of life” insurance.

Ridiculous that a company can brick your devices by shutting down their servers and/or going out of business. Some people have suggested forcing companies to open source their software on death so that someone, somewhere might be able to keep the smart devices alive, but this has difficult implications for capitalizing hardware companies and probably causes more problems than it's worth.

That's why, instead, I think hardware companies that produce "smart" devices dependent on their software to function should be required to do what we make banks do: pay for the risk of their own failure.


👤 gzer0
Commissioner, thank you for raising this important issue and bringing more attention to IoT security. More transparency around support lifetimes is a step in the right direction, but I worry it doesn't go far enough.

The problem is that consumers simply aren't equipped to make security-informed decisions even with perfect information. How many years of updates matters little if the software has vulnerabilities to begin with. And there's no guarantee manufacturers will fully honor the length they claim.

Rather than disclosure rules, I believe we need minimum security standards that all IoT devices must meet before sale, eg:

- No hard-coded credentials/keys
- Encryption of sensitive data
- Ability to patch known vulnerabilities
- Use of secure boot to prevent unauthorized firmware

Standards could be tiered by device type and risk level. Compliance could be self-certified with spot audits, like PCI DSS.

This puts the burden on manufacturers to build more secure devices upfront, not just promise to patch them later. Consumers benefit from safer defaults without needing to become security experts themselves.

Standards also allow security to improve over time as threats evolve. Disclosure rules would quickly become static and outdated.

I'm sympathetic to manufacturer concerns about costs, but I think basic security is a reasonable consumer expectation by now. If we act soon, we can prevent IoT disasters before the market grows even further. I hope you'll consider proposing minimum standards within the FCC or partnering with other agencies like NIST who could.


👤 WaitWaitWha
Thanks for reaching out to the community.

Instead of mandatory updates, there is lower-hanging fruit you can win that will have just as much, if not more, positive security impact.

1. No default password; one must be set at initial configuration (sketched below)

2. Devices must function without a public internet connection (unless transmitting out is one of the device's primary functions)

3. Devices must function without centralized host

4. Explicit disclosure of all "phone home" destination hosts, and ability to change or disable this

5. Explicit disclosure what information is transmitted out, and ability to disable this

I think the above five can be implemented relatively easily, require no continued maintenance from the manufacturers, and improve the CIA triad of IoT devices.
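
As a small illustration of point 1, here is a hypothetical first-boot flow in Python; the minimum length and the blocklist are assumptions for the sketch, not anything the FCC has proposed:

    # First-boot setup: the device ships with no usable credential and refuses
    # to finish initial configuration until the owner sets a real password.
    import getpass

    MIN_LENGTH = 8
    FORBIDDEN = {"password", "12345678", "admin123"}

    def first_boot_setup() -> str:
        while True:
            pw = getpass.getpass("Set a device password: ")
            if len(pw) >= MIN_LENGTH and pw.lower() not in FORBIDDEN:
                return pw
            print("Password too weak; please choose another.")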


👤 jimrandomh
(I also submitted this as an Express Comment in the proceeding. If you agree, consider also filing a comment.)

A lot of issues around IoT device security are hard, but there is one simple and easy piece of policy that would be a big win:

Make the requirements stricter if the product contains a microphone than if it doesn't.

Some device makers are putting microphones into devices that don't need them, to support functionality that isn't useful, just because microphones are cheap. For example, TCL (a Chinese television brand) puts microphones into its remote controls. They do this because while most people don't want to control devices by voice, a few people do, and microphones are very cheap. This is a problem because anything with a microphone in it is a valuable target for hackers; compromising a TV remote with a microphone is _useful_ to them in ways that compromising, e.g., a wifi-connected clothes dryer would not be. If adding a microphone to a device created additional legal requirements, vendors would stop putting them in places where they lack a legitimate purpose, and there would be fewer insecure microphones floating around.


👤 xvilka
I also recommend consulting with projects like LVFS/fwupd[1][2], which provide firmware updates for many different devices.

[1] https://fwupd.org/

[2] https://github.com/fwupd


👤 oars
What a great thread! The only place we can get discussions like these about a topic like IoT security is Hacker News.

👤 plaguepilled
I can't post on the official website (not a US citizen), but I have thought about this question a fair bit and have worked in security-adjacent tech in a past life.

An important note: my suggestions would make many business models unviable. I see this as a win-win because I think that profiting on bad security is extremely unethical and should be illegal.

My requests are as follows:

1. It must be at least a 1-year jailable offence without bail to sell an IoT device that does not have the software and firmware 100% open source. This is the absolute minimum and allows end user auditing. Implementing anything else before this is meaningless.

2. The company must pledge to provide security updates for at least 5 years for any device they sell (if there is no sale, this should not apply).

3. For a security update to be valid in the eyes of the FCC, the update must be signed by an existing employee (accountability must be assigned).

4. If an IoT supplier wishes to aggregate data to sell to 3rd parties, this MUST be optional and it MUST be opt-in.

5. Vulnerability detection and registration must be handled by a third party with a lodgement portal, and companies should have at most 1 month to patch a vulnerability once it has been lodged in the third-party portal. Failure to fix in time should accrue exponentially increasing fines.


👤 ginkgotree
I don't have anything of value to contribute past the insightful comments already made by others, but seeing an FCC regulator reach out to our shoestring, highly qualified community on YC / HN has got to be the most encouraging thing I have seen from the US government in over a decade. Thank you for your efforts on this important topic of how we address the widespread security concerns of internet-connected IoT devices.

👤 demondemidi
Europe has the CRA and RED. Maybe start with those?

👤 ur-whale
Make it illegal to sell an IoT device with closed-source firmware, with punitive damages in case of breach that get distributed directly to the buyer.

👤 klntsky
I might be late to the party, but I think that the government simply should be kept out of regulating consumer electronics. Top-down control doesn't work well, and even if it did, I question the motivation. Why should my smart fridge be regulated, essentially, by the same body that has the ultimate right to exercise violence proactively? It makes no sense.

👤 a-dub
require minimum supported lifetimes of 5 years or full open sourcing of software (device and server) in the event that support ceases prior to the required lifetime.

gradually expand that lifetime out to ten.


👤 jiveturkey
There don't need to be complicated rules.

1. Manufacturers must maintain a VDP. 90-day common fix commitment; 180 days for medical devices and certain other "loss of life" critical equipment that may be considerably more difficult to update than your standard IoT device; 30 days for security equipment, including cameras that have a physical security application.

2. GDPR-level fines, with liability extending to directors. The window of liability is a "security warranty" lifetime of the product, minimum 1 year.

e.g. Jeep has a vulnerability that allows remote control of the vehicle. As we score this CVSS 10.0, they must develop and deliver a fix to all users within 90 days. We don't consider this a medical device, even though a vehicle malfunction certainly can lead to loss of life. Failure to have a fix available in 90 days results in a 0.5% revenue fine per month after 90 days.

e.g. Vulnerabilities are found and announced in St Jude Medical pacemakers in August. St Jude Medical sues the disclosers and refutes the claims. In October they release an update to fix some of the vulns. In August of the following year they fix the remaining vulns. Because the remaining vulns are CVSS medium, a fine of 0.25% per month is levied against Abbott, the new owner of St Jude Medical, for the 6 months beyond the 180-day window during which the medium vulns were not repaired. No additional penalty is levied for suing the disclosers because the vulns were not responsibly disclosed. If instead Abbott had never bought St Jude Medical and St Jude Medical had to declare bankruptcy, the fines would be transferred to the directors.

e.g. TrackingPoint smart rifles are found to have a vuln where the hacker can change the aim of the rifle. TrackingPoint goes out of business before the 90-day window is up, for unrelated reasons. The company has no assets, so liability goes to the directors. However, in this case there is no liability since the repair is easy: disable wifi. The wifi function is not essential to the operation of the device, so this is deemed an adequate repair, even had the company survived.

e.g. Vulnerabilities are found in TRENDnet cameras, commonly used for security/surveillance applications. The window on this is 30 days. 27 days later, TRENDnet announces an upcoming fix, and 3 days later releases an update fixing the vulnerability. Liability is zero.
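
To show how the fine schedule in these examples would add up, here is a toy Python calculation; the revenue figure is invented, while the rates come from the examples above:

    # Monthly-percentage-of-revenue fine, as in the examples above.
    def fine(annual_revenue: float, months_late: int, monthly_rate: float) -> float:
        return annual_revenue * monthly_rate * max(months_late, 0)

    # Medium-severity vulns fixed 6 months past the 180-day window, at 0.25%/month,
    # for a company with a hypothetical $5.5B in annual revenue:
    print(fine(annual_revenue=5_500_000_000, months_late=6, monthly_rate=0.0025))  # 82,500,000.0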


👤 dncornholio
Please don't fall into the misconception that 'frequent updates' are a good thing. A finished product doesn't need updates.

👤 matheusmoreira
Thank you for doing this. I think you should go even further than mandating security updates. Companies should have to provide schematics, source code, everything we need to maintain the devices ourselves, especially if they have reached end of life. The corporations don't even need to incur the costs if they don't want to, they just need to stop getting in our way.

👤 LHelge
Hi from across the pond...

This is a great initiative and will be crucial for keeping a free and open society running smoothly in a more hostile information security environment.

My suggestion is to have a look at the work done in the European Union with the Cyber Resilience Act. Much can be said about what the EU is doing and the competence of the legislators there, but the CRA is actually quite decent. If an upcoming FCC regulation were somewhat aligned with what is required for CE compliance, life for those of us developing IoT products with a global footprint would be much easier!


👤 bArray
> [..] I’m here to discuss security updates for IoT devices and how you can make a difference by filing comments with the FCC.

I'm obviously somewhat invested in the Hacker News platform, but please be sure to get opinions from other groups too. For example, hacker groups may have interesting comments on right to repair.

> I’ve advocated for the FCC to require device manufacturers to support their devices with security updates for a reasonable amount of time [1].

Either require updates, or open up the devices such that a community of open source developers can write updated software. The benefit of the second proposal is reducing e-waste after the update period elapses and encouraging up-cycling.

> If they meet certain criteria for the security of their product, manufacturers can put an FCC cybersecurity label on it.

It's only as good as you enforce it. It might be easier for somebody like Google to buy a small IoT company, fold it rather than invest in updates, then take the IP and employees.

The payment for the FCC sign-off could be large enough that the FCC could feasibly demand access to the internals and pay somebody to patch it on their behalf (like an insurance policy). Of course the preferable option is that the company addresses this themselves.

> I fought hard for one of these criteria to be the disclosure of how long the product will receive security updates.

Good, but not quite there. Providing some form of update is not the same as providing a comprehensive security update, or a timely one. There should be a window of something like 3 months (very generous) to make a good-faith effort to respond to an identified security issue (CVE?), either internally or externally.

The devices of course also need to demonstrate that they are capable of receiving updates and that consumers are capable of applying an update, even if the manufacturer's infrastructure is down or the company is gone. For example, a signed patch could be applied reasonably easily via USB, or over a WiFi interface.
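
To sketch what that offline, signed-update path could look like on the device side, here is a minimal Python example using Ed25519 via the `cryptography` package; the key, file layout, and staging step are illustrative assumptions:

    # Verify a detached signature on a firmware image against the vendor's
    # public key baked into the device, then hand it to the bootloader.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    VENDOR_PUBKEY = Ed25519PublicKey.from_public_bytes(bytes.fromhex(
        "d75a980182b10ab7d54bfed3c964073a0ee172f3daa62325af021a68f707511a"  # example key
    ))

    def verify_and_stage(update_path: str, signature_path: str) -> bool:
        with open(update_path, "rb") as f:
            firmware = f.read()
        with open(signature_path, "rb") as f:
            signature = f.read()
        try:
            VENDOR_PUBKEY.verify(signature, firmware)
        except InvalidSignature:
            return False  # refuse to install anything that doesn't verify
        # ...hand the verified image to the bootloader staging area here...
        return True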


👤 lifeisstillgood
1. It's awesome to see what might actually be good government happening here.

2. Why do we have to have internet-enabled devices? I am not arguing for totally disconnected devices, but why be always on the dangerous internet? If you want updates or other transmissions, use low-power, local, or physical connections only. Set standards for these. The most obvious is Bluetooth-only devices. Now an attacker has to either compromise my Bluetooth house controller (my phone?) or visit my house.