Twitter / X.com vulnerability disclosure

2025/06/26

Jeroen Hermans


As the audience of my blog is becoming more and more international, I have decided that from now on all articles will be written in English.
Vulnerability disclosures are complicated… or well… they’re not. But somehow many companies, including multi-billion-dollar ones, seem to think they are. This is a story of the what, why and how, but also with a hint of a solution.

I don’t think it is a secret that I am a big supporter of vulnerability disclosures. Unfortunately many disclosure procedures are extremely one-sided, sometimes even hostile towards security researchers.
This can create tension. But as the saying goes, “Steel is forged in fire”: we sometimes have to create that fire to build something better and stronger.
It is always nice to see companies recognise the benefit of a proper vulnerability disclosure procedure. Maybe the best example I have seen in the past year is Paxton Access Systems. Although the first contact was rather complicated, in the end the communication with the security department was very direct and led to a safer product. And Paxton acknowledged this on their disclosure page.

Sometimes companies see such a page as something negative. But it is really a sign of a mature, security-minded organisation with proper disclosure procedures in place.

Another example is Yealink. Although a bit more fire was needed to forge this steel, a very real improvement in the vulnerability disclosure procedure was realised. On April 2nd, 2025 the vulnerability disclosure procedure looked like this. And as always, the details are important. The very last chapter of this procedure is “Intellectual Property”. Now… this is interesting, because as far as I can see we are not transferring any IP… or are we?

Regardless of whether the report submission is confirmed to be valid, you hereby agree to transfer all rights and interests (including all intellectual property) of all submitted vulnerability reports to Yealink.

Ok. So we ARE transferring Intellectual Property. Even though this does not read as such, this is 100% an NDA. It is very hard to prosecute a researcher based on a legal process. It is not that hard, though, to prosecute a researcher who publishes his own findings if he has actually transferred the IP of those findings to the manufacturer!

While forging the steel together with Yealink, the vulnerability disclosure procedure was changed.

And now the IP section reads:

By submitting your report, you hereby consent that Yealink may use your findings to remedy any security in Yealink products.
And this is of course exactly what we want. We are still forging, but I am extremely happy that this has been changed.

The legal complexity of being a security researcher starts to dawn on us. So how do we here at CloudAware do this? Why am I not in jail after hacking into TV studio entrance systems? The answer is not that simple, but I can give a few pointers we always apply to disclosures.

  • READ vulnerability disclosure procedures. If a procedure looks like an NDA and walks like one, it probably is one. Do not agree to these contracts. In your first communication, be very clear about who you are, what you do and that you are unfortunately not able to comply with their procedure (but… that you are there to help).
  • The Dutch Public Prosecution Service (OM) has written a Policy Paper on what is considered “Ethical hacking”.
    In short, this Policy Paper states three important factors in deciding not to prosecute a security researcher:
    • Was the action taken in the context of a substantial societal interest?
    • Was the action proportionate (in other words: did the hacker not go further than necessary to achieve their objective)?
    • Was the requirement of subsidiarity met (in other words: were there no less intrusive means to achieve the hacker’s intended goal)?
  • And last but not least, Article 8 of the European Convention on Human Rights:
There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.

Ok ok. Now everybody has a headache from all the laws and legal terms. How can you make sure the first contact with a security researcher is as smooth as possible? Luckily there is an RFC for that: “RFC 9116: A File Format to Aid in Security Vulnerability Disclosure”.

An RFC is a specification that describes how something on the internet should be implemented. Because of RFCs, all browsers can visit cloudaware.eu, for example, even though they have very different code bases. RFC 9116 describes how to publish a small text file, security.txt, on your website. In this text file you publish whom to contact for security-related issues. The Dutch National Cyber Security Centre has published a document about security.txt (in Dutch).

The security.txt is also on the mandatory list of standards for Dutch government organisations.
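
To make this concrete, here is a minimal sketch of what such a file could look like. All addresses and URLs below are placeholders; per RFC 9116 the file is served at /.well-known/security.txt, and only the Contact and Expires fields are mandatory, the rest is optional.

```
# Example security.txt (RFC 9116), served at https://example.com/.well-known/security.txt
# Contact and Expires are required; the other fields are optional.
Contact: mailto:security@example.com
Expires: 2026-06-26T00:00:00.000Z
Encryption: https://example.com/pgp-key.txt
Preferred-Languages: en, nl
Canonical: https://example.com/.well-known/security.txt
Policy: https://example.com/responsible-disclosure
```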

So… now we have a standardised way for security researchers to contact a company. The steel is forged, the fire is no longer needed. Unfortunately it is not that easy. As you have seen, the RFC is a lengthy document and it may not be that easy to interpret. But surely the companies of the richest man in the world, Elon Musk, must have interpreted this RFC correctly. Let’s look at Tesla first. This company makes daily decisions about human lives using software, so they should have a working security.txt in place.

Now that is too bad. Not only does the “Encryption” field not point to a PGP key as required by the RFC, the security.txt also completely lacks an Expires field. Because of this it is not clear whether this security.txt is still up to date.
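
Checking this yourself is not hard. Below is a quick sketch of such a check in Python; the helper name and details are my own, not part of any official tool, and it only looks at the two mandatory fields (Contact and Expires) and whether the Expires date has passed.

```python
# Minimal RFC 9116 sanity check: fetch /.well-known/security.txt and verify
# that the required Contact and Expires fields are present and not expired.
# Illustrative sketch only, not an official validator.
import urllib.request
from datetime import datetime, timezone

def check_security_txt(domain: str) -> list[str]:
    url = f"https://{domain}/.well-known/security.txt"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8", errors="replace")

    fields: dict[str, list[str]] = {}
    for line in body.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        name, sep, value = line.partition(":")
        if sep:
            fields.setdefault(name.strip().lower(), []).append(value.strip())

    problems = []
    if "contact" not in fields:
        problems.append("missing required Contact field")
    if "expires" not in fields:
        problems.append("missing required Expires field")
    else:
        # Expires uses the RFC 3339 Internet date/time format.
        raw = fields["expires"][0]
        expires = datetime.fromisoformat(raw.replace("Z", "+00:00"))
        if expires < datetime.now(timezone.utc):
            problems.append(f"Expires date is in the past: {raw}")
    return problems

if __name__ == "__main__":
    for issue in check_security_txt("example.com"):
        print(issue)
```

Running it against a domain prints any problems it finds; no output means the two required fields look fine.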

Let’s have a look at another company of Elon Musk: x.com (or Twitter, as I still call it) (mirror).

Oh dear… Things are not getting any better here. The “Expires” field is in the past. Let’s try to contact X to get that fixed! The Contact field contains a (valid) URL.

This means Twitter is using the HackerOne system to triage incoming security reports. Now let’s assume I don’t have an account, so I have to create one at HackerOne. Then I try to file a report for X saying that the Expires field in the security.txt has expired. This is a problem, because you need a certain reputation on HackerOne in order to report to X, called a “Signal Requirement”.

Caution!

X (Formerly Twitter) enforces a Signal Requirement that prevents you from submitting reports.

However, you have been granted a few trial reports to improve your Signal. Please use this opportunity to submit only accurate, high quality reports. Failure to do so will result in your inability to submit additional reports.

So the good people at X ALLOW me to help them with this report.
I send in the report and expect a message back: confirmed, will fix, etc. And indeed, about a day later I get a message from “Analyst Trevor”:

Thank you for your submission. I hope you are well. Your report is currently being reviewed and the HackerOne triage team will get back to you once there is additional information to share.

Great! Things are rolling. Steel is being forged even without the need for fire. But you understand this article would not have existed without some fire. About a minute later “Analyst Trevor” sends me another message:

Unfortunately, this was submitted previously by another researcher, but we appreciate your work and look forward to additional reports from you.
At this time, we cannot add you to the original report as the report may contain additional information that we cannot share with you. This may include personal information or additional vulnerability information that shouldn't be exposed to other users. Thank you for your understanding.
Have a great day ahead!

Ok, this is problematic. I don’t need the bounty money for this find (because… let’s be honest, it is not the biggest find), but now my “Signal” is still zero on HackerOne and I have spent my trial report opportunity.
Ah well… At least X now knows about this problem and it will be fixed in no time. Or will it?
My report was in February. It is now June. So… I see two possible scenarios here:
a) Security reports are not getting through to the correct people at X, or
b) X did not get an earlier security report and they just fail to pay out bounties (up to 20,000 dollars for high-severity finds).

Again: this is such a small report that I was not expecting any money for it, but this is troubling. If you are unable to fix such a tiny problem in half a year, what does that say about your incident response procedures? (ISO 27001, anyone?)

As you can see, the job of a security researcher is immensely complex. The risks you run are immense (your job is essentially committing jailable offenses), the technical knowledge you need is complex and changes constantly, and you need to be informed about (international) law in order to mitigate the risks mentioned earlier.

I think security.txt is a great way to show the world that your vulnerability disclosure procedure is in place. It shows maturity in your organisation. But it is a bit technical to make one. Time and time again my customers told me: we do not have the knowledge and/or time to make a security.txt, so we don’t do it.
But: talk is cheap, so I started forging and created the CloudAware Security.txt Generator.

Now you can generate a valid security.txt from within your browser, get help on every single step and even cryptographically sign the security.txt document so that you get that nice green tick on internet.nl!
And if you don’t trust me (and you shouldn’t!): it is fully open source on GitHub, where you can download the source code and run it locally, even without internet access. Enjoy!
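
If you prefer to sign the file yourself on the command line instead, RFC 9116 recommends an OpenPGP cleartext signature. A minimal sketch, assuming you already have a GPG key pair (the file names are placeholders):

```
# Produce a cleartext-signed copy and serve that output as your security.txt
gpg --clearsign --output security.txt.signed security.txt
```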

I will be in contact…