Extent of Bugging and "Three Anomalies"

Extent of bugging

It is not impossible that the bugging of GSOC was limited to elements of AGS, concerned about the Boylan affair, compromising the mobile phones of the two GSOC officers leading the investigation.
If the phrase that was repeated back by a senior Garda was the result of eavesdropping, that eavesdropping might have been via one or both of the compromised mobile phones before, during or after a conference in which people remember the phrase being used. Hopefully ‘during’ is not a candidate. Bringing a mobile or any such device – even if it appears to be turned off – into a room intended to be eavesdropping-proof is not good security. Ditto for going off-site for a discussion.

Although someone concerned with the Boylan affair might obtain sufficient information via the compromising of investigators’ phones, this does not mean that they would stop at that.
Appetites grow. As Morris noted: “such conduct will multiply if allowed to go unchecked”.




What then of the “three anomalies”?

There is a problem in that Verrimus’ brief was to identify potential threats. They had no brief to follow through and investigate those threats to the bitter end.
That resulted in Cooke passing opinion and judgement based on investigations that were less complete than they could have been, given more time and budget.

There are unanswered questions whose answers would bring some clarity to the "three anomalies".

Polycom Unit

The best that Cooke can offer on the call-back to the Chairman’s phone is that it “remains unexplained as a technical or scientific anomaly”. This might be all very well if the incident were taken in complete isolation. The problem is that this ignores context. It also ignores that two mobile phones were almost certainly bugged.


Fake UK 3G Network / IMSI-catcher

Cooke’s Conclusion 4 (page 48):
The fact that the communication with the test bed of the mobile phone company in the UK *may* provide an explanation for the detection of the UK code in question does not, of course, rule out the possibility that there was also an IMSI catcher being deployed in the area at the time. But if that were so, why would the third party engaged in covert surveillance make use of the “obscure” test bed code to create a fake base station rather than the code allocated to the network used by the subscribers intended to be targeted?

What’s this “may”?
The “may” might be based entirely on the 3-digit country code within the 5-digit country/network code (9.70, page 36). On that basis he says that the base station might have been the test bed, but “does not, of course, rule out the possibility that there was also an IMSI catcher being deployed”.
Cooke did look for information from the mobile company but by my reading failed to ask follow-up questions that could have brought more certainty to the matter.

What would be required is for the company to confirm that the running of its test bed would result in precisely what Verrimus observed.
Verrimus observed that the signal was not constant. They could only see it at certain dates and times. They could not see it at other dates and times. Do these conform to times that the test-bed was active and inactive?
It should also cover the particular network code observed. Verrimus describe it as “obscure” – as in not a known public network code for the UK. What network code did the test-bed use? Even if it were the same, there is still a question.

Cooke (above): “why would the third party engaged in covert surveillance make use of the ‘obscure’ test bed code”
One answer might be that the existence in the area of a test-bed with a UK country code would be perfect cover for an IMSI-catcher directed at the UK mobiles of people who might have a level of awareness. If the catcher were set up to signal a peering/consolidation arrangement with the normal UK subscriber network(s), then you would have very covert surveillance with a misleading back-story for anyone who might notice and start questioning. I have never configured a normal cell-phone base station or an IMSI-catcher, so I can’t speak to the detailed technology of this. I can only speak to how, at the design level, I would set about tailoring such a system to do something for me.



Device 4B

This was found to have been opened and a component (lacking manufacturer ID) substituted.
9.55 In his statement he said,
“A tamper proof seal was broken and other visual signs of human interference were present. Finger smudges around internal screw holes being most of note.”
There is no explanation for this. The company that serviced the units from installation in 2007 up to 2013 says that it never opens the units. Its staff would therefore be unaware of any replacement of components. (9.71ff, page 36)

There was a question raised regarding dates on system files. These would not necessarily be definitive in determining mischief. I can give you a memory device with a file containing an image of a front page of one of today’s newspapers. The date on that could be in the past or indeed any date at all.
If I were to introduce malicious files into a system, I would work to ensure that datestamps were not accurate. A real date would allow any subsequent investigation to focus on people’s activity around that date/time.
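To illustrate how cheap that is: the sketch below (Python; the file and the date are invented for illustration) backdates a file’s timestamps with a single standard-library call, no special privileges required.

```python
import os
import time
import tempfile

# Illustration only: the file and the date are invented. The point is that
# anyone who can write a file can also give it any timestamp they like.
def backdate(path: str, when: str) -> None:
    """Set a file's access and modification times to an arbitrary date."""
    t = time.mktime(time.strptime(when, "%Y-%m-%d"))
    os.utime(path, (t, t))  # (atime, mtime) - no special privileges needed

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"planted content")
    path = f.name

backdate(path, "2007-06-15")  # any date at all, past or future
print(time.strftime("%Y-%m-%d", time.localtime(os.path.getmtime(path))))
# -> 2007-06-15
```

This is why datestamps on their own are weak evidence: they record whatever the last writer chose to record.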

If there had been no limit on brief and costs, then a thorough examination would have compared the BIOS and program code byte by byte between 4B and BC – and preferably with another of the same model. Any mis-matches would be the first program code to be disassembled to determine what the function of that code was. In the absence of recorded illicit content, this would be the alternative way of demonstrating an intrusion.
Account would be taken of any manufacturer-issued updates that maintenance records indicated as having been applied to the units.
In addition, the capabilities and internal state of the audio-visual units being controlled via the panel would be examined. This does not appear to have been done.
The standard function of the AMX units was to control the equipment in the room. If someone had gone to the trouble of replacing components in the AMX, they could have done likewise in other units, depending on what existing capabilities they wanted to piggy-back on.
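As a sketch of what the byte-by-byte comparison described above amounts to (the ‘dumps’ here are invented byte strings standing in for firmware images read from two units of the same model):

```python
# Illustration only: the 'dumps' are invented byte strings standing in for
# firmware images read from two units of the same model.
def diff_regions(dump_a: bytes, dump_b: bytes):
    """Return (offset, length) runs where two firmware dumps differ.

    Each mismatched run is a first candidate for disassembly."""
    regions = []
    start = None
    n = min(len(dump_a), len(dump_b))
    for i in range(n):
        if dump_a[i] != dump_b[i]:
            if start is None:
                start = i
        elif start is not None:
            regions.append((start, i - start))
            start = None
    if start is not None:
        regions.append((start, n - start))
    if len(dump_a) != len(dump_b):  # trailing data present in only one dump
        regions.append((n, abs(len(dump_a) - len(dump_b))))
    return regions

clean = bytes(64)                                      # reference unit
patched = bytes(16) + b"\xde\xad\xbe\xef" + bytes(44)  # suspect unit
print(diff_regions(clean, patched))  # -> [(16, 4)]
```

Any region flagged this way would then be disassembled to determine its function, after allowing for legitimate manufacturer updates.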

Cooke does seem to realise that the absence of a microphone in 4B is not necessarily a bar to eavesdropping, because the unit is part of an overall system containing audio-visual sensors (9.29 and 10.16). The possibilities would depend on the particular units involved, both as standard and as potentially compromised, but these are not described.

There is an invalid assumption by Cooke in 10.10 (fed by 9.28).
He believes that the replacement of components in 4B is not relevant and therefore does not require explanation, and that any configuration of some arbitrary non-standard function could be achieved via the standard interface panel. This is entirely mistaken. The whole point of replacing components to compromise a system is to make it do something additional that it was not designed to do.

He also assumes that any illicit content from the unit and associated audio-visual equipment must necessarily go out to somewhere on the Net via the Bitbuzz network. This is not a valid assumption either.

Opportunistic use of a local legitimate WiFi hotspot that just happened to exist nearby would mostly be a convenience. It would enable monitoring from a completely remote location via the Internet, preferably via an anonymised server, and it would remove the need to place a custom WiFi hotspot near the location whenever monitoring was to take place.
Piggybacking all traffic on a legitimate network such as Bitbuzz would carry risks however.
The management functions of that network could flag unusual behaviour and block the unit from connecting.
A well designed compromise that used a legitimate network for relay would minimise connection times by only connecting as needed. Alternatively information could be stored and compressed for later transmission in short bursts over time.
Also it might only go into eavesdropping mode or transmit collected content on an external command.
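A minimal sketch of that store-compress-burst idea, assuming only that the implant can buffer what it captures (the class, framing and sizes are invented for illustration):

```python
import struct
import zlib

# Illustration only: class name, framing and burst size are invented.
class BurstBuffer:
    """Stores captured content compressed, and hands it out in short bursts.

    Each chunk is length-prefixed so a receiver can re-frame the stream."""

    def __init__(self, burst_size: int = 4096):
        self.burst_size = burst_size
        self.pending = b""

    def record(self, chunk: bytes) -> None:
        comp = zlib.compress(chunk)
        self.pending += struct.pack(">I", len(comp)) + comp

    def next_burst(self) -> bytes:
        """One short transmission's worth: connect, send, disconnect."""
        burst = self.pending[:self.burst_size]
        self.pending = self.pending[self.burst_size:]
        return burst

    def bursts_remaining(self) -> int:
        return -(-len(self.pending) // self.burst_size)

buf = BurstBuffer(burst_size=512)
buf.record(b"an hour of captured audio, heavily simplified " * 200)
# Content then leaves in a handful of brief connections, not one long session.
```

The operational point is the traffic shape: many short, ordinary-looking connections rather than one long one that network management might flag.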
In 9.67, Cooke asserts that control of 4B by a remote entity was not possible, on the basis that “this would require particular software and configuration” of the device to request instruction and act on it. There is talk of the difficulty or impossibility of routing traffic from outside to a given device on a WiFi net. That talk is misleading.
If the real situation was that the unit had been compromised, then he is totally mistaken. The compromise would be the “particular software and configuration”.

Think of it like this. You use your browser to view a web page. You read information and act on it. Doing the same thing by program is a quite trivial exercise. The unit could be made to issue, say, an HTTP request at intervals. To an observer of the traffic, this would look like perfectly normal user behaviour accessing a website. In addition to requesting commands by this method, the unit could alert an outsider to some condition by issuing a tailored request. Even if an observer were decrypting the content of the packet traffic, such command requests and alerts would look like perfectly normal user traffic to and from an app (which of course is what it would be: app, exploit – both are program code).
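A sketch of that polling pattern, with the network fetch stubbed out as a plain function so it runs stand-alone (the command vocabulary is invented):

```python
import time

# Illustration only: the command names are invented and the page fetch is
# stubbed out as a plain function, so the sketch runs without a network.
COMMANDS = {"idle", "record", "transmit"}

def poll_once(fetch) -> str:
    """Fetch the 'page' and pick out a command; default to doing nothing."""
    for token in fetch().split():
        if token in COMMANDS:
            return token
    return "idle"

def run(fetch, act, interval_s: float, cycles: int) -> None:
    """What a compromised unit would do: ask at intervals, act on the answer."""
    for _ in range(cycles):
        act(poll_once(fetch))
        time.sleep(interval_s)

actions = []
run(lambda: "weather is fine record at will", actions.append,
    interval_s=0.0, cycles=2)
print(actions)  # -> ['record', 'record']
```

To anyone watching the wire, each poll is just a device fetching a web page on a timer.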

To unlock the tinfoil achievement, the server could be set up on a freebie service and look like some mom-and-pop blog/site dealing with something innocuous. Commands with serials could be hidden in normal page markup code. Particular alerts would be encoded as parameters in page view requests or file uploads.
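A sketch of that hiding scheme, with an invented comment format; the serial stops a cached or replayed page from re-triggering an old command:

```python
import re

# Illustration only: the comment format and serial scheme are invented.
# An innocuous page carries <!-- rev:42:transmit -->; the serial stops a
# cached or replayed page from re-triggering an old command.
CMD_RE = re.compile(r"<!--\s*rev:(\d+):(\w+)\s*-->")

def extract_command(page: str, last_serial: int):
    """Return (serial, command); command is None if nothing new is present."""
    m = CMD_RE.search(page)
    if m and int(m.group(1)) > last_serial:
        return int(m.group(1)), m.group(2)
    return last_serial, None

page = "<html><!-- rev:42:transmit --><body>Our family bakery...</body></html>"
print(extract_command(page, last_serial=41))  # -> (42, 'transmit')
print(extract_command(page, last_serial=42))  # -> (42, None)  (already seen)
```

The page renders as an ordinary site; only a client that knows the format sees anything else.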

Even then there would be an issue with the use of Bitbuzz for transmission of content.
Bitbuzz apparently limits connection sessions to 20 minutes. If a conference running longer than that were to be recorded and transmitted live and continuously (as opposed to being transmitted in intermittent compressed chunks over a longer time), this would entail reconnecting every 20 minutes or so. That sort of activity might be picked up as abusive by the management functions of the Bitbuzz network.

If I were designing such an intrusion, the ideal situation would be a compromised device or confederation of devices (system) on the inside. The system would preferably have writeable storage ability and would collect data in real time. I would be interested in audio and anything being displayed/projected by the conferencing system. As an example, I would be interested in seeing ‘Power Point slides’ as well as the talk over them.
I would have options on remote control and data collection.

I could use a local public WiFi purely for receipt of commands and the issuing of alerts. Such low volumes and low connection rates would fly very much under the radar.
When alerted by the system that content had been recorded, I could get near the building with my own WiFi hotspot. As soon as the system sees that particular hotspot, it would authenticate and deliver the content via my hotspot.
I would have overall monitoring and control from anywhere on the planet. I would only need to get near the building to collect content that had been alerted to me. My hotspot would only be detectable for the short time during which it was active. My compromised device inside the building would do nothing in relation to communicating with my hotspot until it became accessible.
Think of it like this: go into the WiFi section of your phone/device settings. You’ll see a list of nearby open and protected WiFi nets. Your device was not actively searching for specific nets; it simply passively noted their presence. Your device can automatically connect to a previously connected net, or you can have it authenticate to a new one.
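A sketch of that wait-for-my-hotspot behaviour, with the passive scan stubbed out as a function (the SSID names are invented):

```python
# Illustration only: SSID names are invented and the passive scan is stubbed
# out as a function returning what the WiFi stack would have noted nearby.
def deliver_when_present(scan, target_ssid: str, deliver) -> bool:
    """Act on one passive scan: deliver content only if the target is visible."""
    if target_ssid in scan():
        deliver()
        return True
    return False

sent = []
# Nothing happens while only ordinary hotspots are around...
deliver_when_present(lambda: ["Bitbuzz", "CafeGuest"], "ops-hotspot",
                     lambda: sent.append("content"))
# ...delivery fires only when the pre-arranged SSID appears.
deliver_when_present(lambda: ["Bitbuzz", "ops-hotspot"], "ops-hotspot",
                     lambda: sent.append("content"))
print(sent)  # -> ['content']
```

Until the pre-arranged hotspot appears, the implant does nothing detectable in relation to it.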


If the system had limited data storage capability, I could alert and transmit an amount of the content in one session. I would have to act on the alert and get to the building so that my own hotspot could do the heavy lifting.
In either case, I could transmit some of the content in the alert as a sample so that I could evaluate if a trip to the building was justified.

Alternatively, my own hotspot could do all the work, but that would depend on knowing when the target room was going to be used.


Try this at home kids:
If you have, say, an iPhone, you have an option to turn it into a hotspot. Any WiFi devices nearby will be able to see it. Depending on the distances involved, you might need something a bit beefier to communicate with a device in a room high in a building along a street, but the principle is the same. It can still be very portable and unobtrusive.


Had the brief and cost allowed, the BIOS and program code in 4B (with its unexplained component replacements and file datestamps) could have been examined specifically to see if any compromise had happened. Some or all of it might have been erased by a departing intruder, but it might still show traces.
The audio-visual units in the room could also be examined for any compromise and for their capabilities in relation to the AMX control panel.

Even without disassembly, there is a great deal that could be learned from a ‘black box’ approach.
Put a standard AMX unit in that room and sniff what happens versus the Bitbuzz network over a few days. This time, examine the content: use the same weak WEP encryption and break it.
Examine any transmissions between audio-visual units and the control panel when those units are used.
Do a failed authentication with the Bitbuzz network and see what that does to the traffic. Do a successful authentication with Bitbuzz and see what that does.
Compare all of that with what was observed by Verrimus at the beginning.
Reassemble 4B and do all the above. Do we see that same pattern and level of traffic as was initially observed?
Decrypt all packets.
What happens if you repair the broken display panel on 4B (or 4B clone) and use as originally configured to control the audio-visual units?
Ask Bitbuzz for records of maintenance to the coffee-shop base and to their systems that might affect connections to the network.
What is seen on the various incarnations of 4B above if they are probed by IP address? Many devices have internal web servers for remote configuration and reporting. e.g. Your home modem/router - your IP-connected printer – maybe your new fridge. Many of them are wide open to the same subnet if not also to the Net at large.
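A sketch of that sort of probe (the subnet and port are placeholders; run this kind of scan only against equipment you are authorised to test):

```python
import ipaddress
import socket

# Illustration only: subnet and port are placeholders. Run this kind of
# probe solely against equipment you are authorised to test.
def hosts_with_open_port(subnet: str, port: int = 80, timeout: float = 0.2):
    """Return addresses in the subnet that accept a TCP connection on `port` -
    a first pass at spotting embedded web servers like those mentioned above."""
    found = []
    for host in ipaddress.ip_network(subnet).hosts():
        try:
            with socket.create_connection((str(host), port), timeout=timeout):
                found.append(str(host))
        except OSError:
            pass
    return found

# e.g. hosts_with_open_port("192.168.1.0/29") would list any devices on that
# small subnet answering on port 80.
```

Anything that answers can then be fetched with an ordinary browser to see what configuration or reporting interface it exposes.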
When testing the standard AMX unit, be aware of the
Don’t get fixated on Bitbuzz. Just check it out. Compromise was possible without Bitbuzz.
The approach is to understand everything.



In the case of 4B all we know is that components were replaced at some stage unknown – and that this has not been explained.
The display panel on the unit was broken. When did that happen exactly?

Was there a telephone extension linked into the collection of audio-visual conferencing equipment in the room? That would be another route for content to be exported. Unit 4B has a base function to control all other units in the room. It has had components replaced and there is no explanation offered for that.
Have any of those units been opened up in search of components added or replaced? Has the software within them been analysed?

The investigation of possible exploits in the conference room was never fully completed. A brief to do so was never given.
It might be that there was actually no compromise of the equipment. A thorough analysis of all units in the room might throw light on that.
The level of effort needed to implement such a compromise would be a considerable step beyond the very standard/routine level of the "ambient listening" that almost certainly was done on the GSOC mobiles and very likely on the chairman's phone line.

