This post was sponsored by Comcast, but the actual contents and opinions are the sole views of MakeUseOf.com.
If you’re a regular MakeUseOf reader, you’re probably no stranger to replacing the original operating system on your computer with another. We’ve covered Linux on Macs, macOS on PCs, and Windows on barebones systems in prior posts.
But what about tablets? They’re just like smaller and thinner notebook computers, right?
Actually, a number of differences make installing a new operating system on a tablet more complicated. In this article we’ll look at some of these challenges.
How Tablets Differ From Notebooks and Other PCs
Before we dive into the specifics let’s consider how tablets are different from computers.
PC users can add different components to their machines at their whim. In contrast, devices like tablets are more like consumer electronics. In order to provide a fault-free experience, manufacturers control what hardware can go into the device. It follows that when manufacturers don’t need to worry about accommodating all the different hardware a user may try, they don’t need the software side that interfaces with that hardware.
And this is the main difference. PCs (most, anyway) are expandable and customizable systems. Tablets try to provide users with a specific function (even if that function is a fairly standard computing experience). And the software that comes with tablets has a lot more in common with device firmware (despite the inclusion of an operating system, e.g. Android).
Note: While I’ve used the term “tablet,” the following also applies to most phones and other devices running Android, for example the NVIDIA Shield console. The concepts apply to iOS devices as well, though there the challenges can all be addressed by jailbreaking the device.
How to Prepare Before Replacing Your Tablet’s OS
While some tools such as ClockworkMod’s ROM Manager provide a point-and-click method to swap ROMs around based on your device model, you may need to do so by hand. Let’s look at some of the things you’ll need to prepare for prior to starting.
Tablets Don’t Provide Root-Level Access by Default
In a nutshell, most devices don’t allow you access to system files, only your own data and apps. “Rooting” a device is the process of granting full read/write access to those files and directories, and we’ve covered how to do this in our in-depth guide on rooting Android phones.
Jailbreaking your iOS device or using an application such as SuperSU for Android will grant access to these system files. Gaining root access can make backing your device up much easier, with more sophisticated tools.
Pro Tip: Unlock root-level access. Seriously. If you want to change your OS, you’re already altering your device in a big way. Gaining access to these files is pretty tame by comparison.
Tablet Backup and Restore Isn’t Straightforward
As with any OS replacement, you should first back up your existing system and make sure you can restore it if something goes wrong. On a PC, this can be as simple as cloning your hard drive or backing up select pieces of your system using software tools.
But on an Android tablet, the system lives on a few different partitions within the device’s memory, including:
- The boot partition, which contains the kernel and other data needed to start up the system.
- The system partition, which contains the bulk of the operating system such as applications and libraries.
- The recovery partition, which contains the tools needed to reset the device back to a “factory” state.
The above image shows the Android app DiskInfo illustrating details of partitions like boot and /system. Some device models may have other partitions that are important for backup purposes. You should confirm what these are and back them up before trying out a new tablet OS. Since juggling all these partitions can be overwhelming, consider using some special tools: you can back everything up at once with a Nandroid backup, or use one of the more powerful backup tools (both of these require root access).
Pro Tip: Make sure you’ve taken a backup of your device that can be easily restored.
Tablets Have Locked Bootloaders
In order to install an OS, you will typically boot your computer into some sort of installer. This may be a recovery disc you received with your hardware or a thumb drive you created yourself. Regardless, both the venerable BIOS and the modernized (but annoying) UEFI allow the user to boot other operating systems on some level.
But manufacturers typically lock the bootloader on a tablet, and you will need to unlock it. Some manufacturers provide applications to do the unlocking for you, while other devices have easily accessible commands to do so. Nonetheless, you’ll need to unlock that bootloader in order to install a new ROM.
Pro Tip: Ensure you have the right bootloader unlock tool for your tablet model. And before you start, understand what the consequences of using it are (e.g. voiding your warranty).
System Software Installation for Tablets Requires Care
With desktop and server operating systems, the installer is a sophisticated program. It can catalog a machine’s hardware, take some user input on what they’d like to do, and configure drivers or software as a result. Tablet makers, in contrast, know exactly what the hardware is going to be, and they’re defining the experience. They just need to push that OS to the device’s memory.
This involves the use of low-level tools like ADB or fastboot. One function of fastboot is writing system images, byte by byte, to the device flash memory. You’ll probably find yourself copying and pasting detailed commands like the following:
fastboot flash recovery twrp.img
Be very careful when entering these commands: if you were to replace “recovery” with “bootloader,” it might try to overwrite the boot partition with an image that’s too large. Then when you reboot, there’d be no valid bootloader, and it’d be time to break out the backups.
Depending on your ROM, you might issue commands like the above for:
- the bootloader,
- the recovery image (a sort of minimal system like when using the Windows Advanced Boot Options or Safe Mode),
- and the system itself.
Alternatively, the recovery image itself provides tools to update the device’s ROM. At that point you should have little trouble copying ROM files to your device, flashing them with the recovery tool, and testing them out.
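To illustrate the caution above, here’s a minimal Python sketch of a safety check you could run before flashing: refuse to build the fastboot command if the image won’t fit the target partition. The partition names and sizes here are made-up examples, and the helper function is mine, not part of any real tool; query your own device for its actual partition layout.

```python
def check_flash(partition, image_size, partition_sizes):
    """Return the fastboot command if the image fits the partition, else raise."""
    limit = partition_sizes.get(partition)
    if limit is None:
        raise ValueError(f"Unknown partition: {partition}")
    if image_size > limit:
        raise ValueError(
            f"{image_size} bytes won't fit in '{partition}' ({limit} bytes)"
        )
    return ["fastboot", "flash", partition, "image.img"]

# Fictional example partition sizes, in bytes:
sizes = {"recovery": 16 * 1024 * 1024, "boot": 16 * 1024 * 1024}

# A 12MB recovery image fits in a 16MB recovery partition:
print(check_flash("recovery", 12 * 1024 * 1024, sizes))
```

A real tutorial will give you exact commands for your device; the point of the sketch is simply that size-checking before you flash is cheap insurance.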
Pro Tip: Find a tutorial for your exact device model and copy/paste the commands carefully. For more precision (and confidence), look for two tutorials with similar content, and use the easiest to understand.
Do You Still Want to Brave a Tablet Upgrade?
While the rewards of installing a fancy new tablet OS are tempting, bear in mind that it’s a delicate process. There is very little hand-holding when it comes to these operations, and just about anything you do above will cancel your warranty.
However, if you want to delve in, the XDA Forums are a great place for Android owners to start. They’ve got specific steps on rooting, unlocking, and more, for just about any Android device model. iOS users can head to the Jailbreak and iOS Hacks forum over at MacRumors, another information goldmine.
Are you ready to dive into the world of custom ROMs and unlocked devices? Have you been burned by an upgrade process gone wrong? Let us know if you think it’s all worth it in the comments below!
Recovering deleted data from a hard drive is generally possible because typically the actual data is not deleted. Instead, information about where the data is stored is removed. In this article I will explain how data is stored on a hard drive, what happens when files are deleted, what formatting a hard drive does, and why it is impossible to recover files after they were overwritten.
The article outlines how data is stored on the physical level, which is essential to understanding why it cannot be restored after being overwritten. If you are interested in the organizational structure of a hard drive, i.e. how the storage of files is managed, please read the article What A File System Is & How You Can Find Out What Runs On Your Drives. For more information on how to recover deleted files, see the resources at the bottom of this article.
How Is Information Stored Digitally?
Digital information is stored in bytes. Each byte contains 8 bits, and each bit has a value of either 0 or 1. This way of storing data is called the binary numeral system because it uses two symbols, i.e. 0 and 1. Consequently, any data stored on a computer is written in binary code, which is a string of 0s and 1s.
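You can see this directly in a few lines of Python: the letter “A” (ASCII code 65) stored as one byte of eight bits.

```python
# The letter "A" as one byte: eight bits, each either 0 or 1.
byte = format(ord("A"), "08b")
print(byte)          # "01000001"
print(len(byte))     # 8 bits per byte
print(int(byte, 2))  # back to 65, the ASCII code for "A"
```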
How Do Hard Drives Store Information?
Information on hard disk drives (HDDs) is stored magnetically and is non-volatile, meaning no power is required to maintain the stored information. Every magnet has a plus (+) and a minus (-) pole, which gives two possible values and thus allows it to represent binary code. The HDD storage unit, or platter, contains a ferromagnetic surface, which is subdivided into small magnetic regions called magnetic domains. HDDs store data by directional magnetization of these domains. Each magnetic domain can be magnetized in one of two possible directions and thus represents one of two values: 0 or 1.
There are two different technologies for recording data on an HDD. Prior to 2005, the recording layer was oriented parallel to the disk surface (horizontally), meaning the binary code was represented by directional left vs. right magnetization (longitudinal recording). Around 2005, a new technology was introduced in which data is written by magnetizing segments vertically, i.e. up vs. down (perpendicular recording). This allowed closer magnetic domain spacing and thus enabled larger storage capacities.
How Is Data Stored In Random Access Memory (RAM)?
Essentially, data is stored the same way as on a hard drive, i.e. in binary code. The major difference is that this type of storage is volatile, meaning any stored information is lost as soon as power is removed. RAM is made up of integrated circuits, which in turn contain capacitors and transistors. Each capacitor stores one bit of data: it can be either charged or discharged, i.e. 1 or 0, representing the binary code.
What Happens When Data Is Deleted?
In a RAM module, the organizational structure is very flat. When data is removed from memory, the actual information vanishes instantly. Also, when power is lost, the capacitors quickly discharge and hence all information is lost.
The situation on a HDD is completely different as information is stored in two ways. First, data is stored physically on the magnetic hard drive. Secondly, all stored data is managed by a file system, which creates an information table revealing the exact location of data, i.e. where on the hard drive a certain file is stored. This is necessary because one file can be stored in different locations across the hard drive. The operating system then uses this table to locate files and put together the pieces of large files.
When a file is deleted, typically only the information stored in the file system’s table is removed. Since it would take too long to delete the actual file, the physical location of the data remains untouched. When the operating system wants to store new files, however, it consults the table for available space. Since the location of the deleted files was marked as vacant, the operating system may then write new data over the old data, which permanently deletes that information.
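The idea can be sketched with a toy model in Python: the “platter” holds raw bytes, while a separate table records where each file lives. The file name and layout here are invented purely for illustration.

```python
# Toy model of a disk: deleting a file only touches the table, not the platter.
platter = bytearray(b"HELLOWORLD------")   # raw storage
table = {"greeting.txt": (0, 10)}          # filename -> (offset, length)

# "Delete" the file: only the table entry goes away...
del table["greeting.txt"]
print(platter[0:10])   # ...the bytes are still there: b"HELLOWORLD"

# A recovery tool could still read them. But once the space is reused:
platter[0:10] = b"NEWFILE---"
print(platter[0:10])   # now the old data is gone for good
```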
For details on how the file system works and how it organizes and manages hard drives, see my article What A File System Is & How You Can Find Out What Runs On Your Drives.
What Happens When A HDD Is Formatted?
The type of formatting that most users are familiar with is called high-level formatting and it is the process of setting up an empty file system. Since it does not require scanning the hard drive for defects, it is also called quick formatting.
Typically, data stored on the hard drive is not physically deleted during formatting. What does happen is that the file system is set up from scratch, meaning the hard drive is re-organized and the table with information where files are stored is reset. As long as the file system and its settings remain the same, none of the actual data previously stored on the hard drive is deleted or overwritten and can subsequently be recovered.
What Happens When Data Is Overwritten?
When data is overwritten, the magnetic domains on the HDD are re-magnetized. This is an irreversible process that physically removes the information previously stored in that location. Some residual physical traces of the changes (or non-changes) in magnetization may remain, which could theoretically allow a partial restore. However, this would require a magnetic force microscope or similar technology, none of which has been shown to recover data successfully so far (although you never know what’s going on in secret government intelligence labs). So in essence, there is no software or other technique known to the public that can restore overwritten data.
Do you need to recover data that has not been overwritten, yet? Please check out these resources:
- How to Recover Data from a Corrupt Memory Card or USB Drive
- How to Scan a Reformatted Hard Drive to Recover Files
- 3 Remarkable File Recovery Tools
- How To Recover Deleted Files From Your Linux System
- How To Repair Damaged CD’s Or DVD’s & Recover Data
- How to Recover Deleted Pictures from a Digicam Memory Card
- How To Get Data Off A Dead Hard Drive
Many more great resources can be found in reply to these questions posted on MakeUseOf Answers:
- How can I recover deleted files in Windows?
- How can I recover data that were shredded in Windows?
- How can I recover data from a corrupted USB drive folder?
- How can I recover data from a broken microSD card?
- Is it possible to recover data from a broken CD?
- How can I recover data from a faulty USB external hard drive?
What are your data storage and recovery nightmares? Did you ever lose files after accidentally deleting them?
The war between the “PC master race” and console fanatics rages on. More often than not, it comes down to price. But is it still cheaper to build a gaming PC than to buy a video game console? Yes, but with some caveats.
The simple act of buying one or the other shows that a console will cost as much as, if not less than, a PC. But when you consider the long-term cost of gaming, the PC master race has a valid point.
What We Looked At
We had a clear idea in mind. This article is about gaming, and gaming alone. Sure, a gaming rig doubles as a wonderful all-around PC. But similarly, the Xbox One and PlayStation 4 are excellent media players.
Similarly, the consoles need a high-definition TV, preferably with 4K resolution and HDR, while you’ll need a high-quality monitor for your gaming PC.
There are other such trade-offs on the non-gaming sides of both PC and console, but we won’t be considering those here. Usually, they balance each other out and depend on the user’s needs, so let’s concentrate on the gadget’s gaming capabilities alone.
What’s the Real Cost of a Console?
Currently, there are a few variations of the two major consoles available online:
- PlayStation 4 Slim: $250
- Xbox One S: $260
- PlayStation 4 Pro: $400
- Xbox One X: $500 (releasing soon)
All of these packages come with a 500GB or 1TB hard drive, an HDMI cable, and a wireless controller. The two cheaper options even throw in a free game.
But what you don’t see in that price tag is the hidden cost of online play. Both consoles require a subscription to play games online. The Xbox Live and PS Plus services cost about $60 per year each.
Pundits expect the next generation of the PlayStation and Xbox consoles to launch in 2020 or 2021. That’s another 3-4 years of subscription costs.
So for the real cost, add $240 to your console’s price tag.
What’s the Real Cost of a Gaming PC?
A gaming PC is going to cost as much as you want it to cost. The advantage of building your own rig is that you can go as high-end or cost-effective as you want.
In fact, one major cost is eliminated entirely: playing online. You don’t have to pay any extra fees for multiplayer gaming on PCs.
The hidden cost on a PC is Windows. Yup, you will likely have to pay for a new version of Windows, and a legal and cheap Windows 10 purchase sets you back about $90.
So add $90 to your gaming build’s total price, or include Windows while configuring it.
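The “real cost” arithmetic above is easy to put in a few lines. This sketch uses the article’s own figures (a $60/year subscription over roughly four remaining years, and a $90 Windows license); the function names are just for illustration.

```python
SUB_PER_YEAR = 60   # Xbox Live / PS Plus annual cost
YEARS = 4           # roughly until the next console generation
WINDOWS = 90        # a cheap, legal Windows 10 license

def console_true_cost(sticker_price):
    """Console sticker price plus years of online-play subscription."""
    return sticker_price + SUB_PER_YEAR * YEARS

def pc_true_cost(build_price):
    """PC build price plus a Windows license."""
    return build_price + WINDOWS

print(console_true_cost(250))  # PS4 Slim: 250 + 240 = 490
print(pc_true_cost(535))       # budget VR build: 535 + 90 = 625
```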
For Casual Gamers, It’s a Tie
Currently, the PS4 Slim and Xbox One S cost around $250. It’s hard to beat that with a PC, especially for casual gamers. These consoles will likely support new games for their entire life, which should be roughly until 2021. The PS4 Slim also supports virtual reality (VR) gaming with the PlayStation VR headset.
But this is for casual gaming only, and without online play. Remember, consoles require an additional PS Plus or Xbox Live purchase to play online. That’s an extra $240 until 2021. You’re almost paying for another console!
You pay nothing to play online on a PC, of course. And that’s where PC gaming has a major edge, especially for casual gamers. For example, let’s say all you want is a gaming rig to play strategy games and online titles like Overwatch. You can build a good PC for the same $250 or less, including an 8-core gaming PC for $200, which will play Overwatch perfectly for years.
The consoles are better when you consider first person shooters, racing, sports, and other such games. The $250 PC won’t handle these well, while the consoles will be fine until 2021.
In the end, it depends on what kind of a casual gamer you are.
- If you want to play new titles sporadically and aren’t bothered about online gameplay, get one of these cheap consoles.
- If you want to play strategy games or MMORPGs, but aren’t bothered about new titles, a PC will cost you less in the long run.
In case a console seems more up your alley, ask this question to decide between PS4 and Xbox One.
VR Gaming and Its Cost
This is a tricky period in the gaming industry. We are just entering the age of VR gaming, so no one is exactly sure what you’ll need or not need in the future. But let’s look at what you can buy right now:
- Oculus Rift and Touch: $400
- HTC Vive (with controls): $400
- PlayStation VR kit (with camera and Wand): $450
The Xbox One currently does not support any VR headset, so it’s out of the picture in this assessment.
That puts the true cost of the PS4 Pro ($400), the PlayStation VR ($450), and PS Plus ($240) at about $1,100. Without getting into a debate about Rift vs. Vive vs. PS VR, let’s talk about a gaming rig to rival the PS4 Pro and PS Plus.
Basically, subtract the $400 for an Oculus Rift or HTC Vive from the $1,100 above, and we have $700 to build a PC capable of playing VR games. Let’s build a system!
Comparing Costs: Gaming PCs and Consoles
As usual, we use PC Part Picker to look up prices and automatically build our PC. It’s the best site for such geeking out, and supports local e-retailers from several countries.
Best Budget Gaming VR PC: $575
PC Part Picker’s staff recently published a Budget VR Gaming Build, which costs $535 without Windows 10. Add in the $90 for the operating system and this is a solid $625 budget VR gaming PC.
We built a similar rig with a more powerful processor and with Windows 10 for $575. Both configurations use the Nvidia GeForce GTX 1050 Ti, which is an excellent choice to build a cheap gaming PC with an Nvidia graphics card.
Recommended VR Gaming PC: $735
The Oculus Rift and HTC Vive have a set of recommended specifications for PCs. By this, the companies mean you won’t face any performance loss, and you’ll get the experience they intended.
With that in mind, we built a rig and added in a few extras that we think are worth it. The end result for the best experience is above our budget by $50, but remember, it’s a customizable PC. To fit the $700 budget, you can easily cut back on a few items, like the DVD writer.
And to be fair to this rig, its real competitor is the Xbox One X and its 4K gaming. The rest of the consoles, including the PS4 Pro, can’t match up to the video quality here.
A Few Things You Can Change
- Feel free to swap out the HDD for an SSD. You’ll get improved performance, but about a quarter of the storage space for the same price.
- A DVD writer is not necessary any more. Most people buy games online. But the last time we didn’t add this in, our readers were quite vociferous in their disapproval.
- If you prefer to buy a PC off the rack than build your own, check this list of the best VR-ready gaming PCs.
What About the Games?
Content is king, they say. What’s the point of having an excellent PC gaming rig or a console if it doesn’t have the games you want to play? And what about the prices of the games, where AAA titles cost $50 and more?
Largely, you can’t differentiate much in the prices between consoles and PC. At launch, the difference is negligible or non-existent. So for early adopters, you are paying the same no matter what device you use.
Over time, PC games do get cheaper though, and you can save money on gaming through Steam sales and bundles. But there is a large market of stores that rent PS4 and Xbox One games. Overall, how much you pay for games on consoles or PCs will even out as long as you are smart about it.
As for the availability of games itself, PC has a slight edge here. It has a larger variety of games than the consoles. However, the consoles have plenty of exclusives, and those are usually the best games of the year. Top Ten Gamer has an amazing list of PS4 vs. PC vs. Xbox One games where you can see a comparison of what is available on which platform.
As for VR, the Oculus Rift has the most VR titles currently available and upcoming. Plus, what kind of a gamer are you if you aren’t supporting Oculus CTO John Carmack, the papa of modern gaming and the creator of Doom, which is also on the Rift?
Do You Think Consoles Are Cheaper?
In conclusion, serious gamers are better off with a PC gaming rig for now. You get better hardware at the same price, easy future upgrades, better VR compatibility, and VR games. Meanwhile, consoles are best left for casual gamers.
Yet, not everyone agrees with this. Harry thinks consoles are cheaper than PCs for gamers, and there might be some readers in his corner.
What do you think? Do you get a better deal with a PC or a console for gaming?
Image Credit: muchmania/Depositphotos, Nikitarama via Wikimedia Commons
A dongle is a small device, typically in the shape of a USB flash drive, that plugs into another device and provides extra functionality. A wireless dongle, also called Wi-Fi adapter, is a thumb-drive-looking device that provides Wi-Fi capabilities to a device that otherwise isn’t Wi-Fi-capable, such as a desktop PC with no wireless network card.
Dongles are generally useful because they can be easily moved between devices, they don’t take up much space, and the added functionality is convenient (e.g. a Roku Streaming Stick lets you stream thousands of services directly to your TV).
But when using a wireless dongle, you may run into some issues — in particular, poor wireless speeds that don’t live up to what your ISP plan can deliver. Here are some reasons why you may have subpar wireless dongle performance and what you can do about it.
1. Wireless Interference
Wi-Fi devices can communicate using two different bands: the 2.4GHz band, which is older and supported by most devices but slower, and the 5GHz band, which is newer and faster but has a shorter range and is only supported by devices from the past few years.
While modern wireless dongles tend to support both bands, you can only utilize the 5GHz band if your router also transmits on the 5GHz band. If your router isn’t a dual-band router, then you’re stuck using the 2.4GHz band. This is why dual-band routers are essential.
What’s so bad about the 2.4GHz band? Well, it’s extremely narrow. In the U.S., you only have 11 channels to choose from, and even that’s deceptive because each channel’s frequency overlaps with the frequencies of neighboring channels. This means that channels 1, 6, and 11 are the only non-overlapping channels.
Overlapping channels are bad because the wireless data waves can interfere with each other, causing lost data packets that need to be resent. Resending data packets takes time, and this can cause your wireless speed to drop. With a lot of interference, the drop can be significant.
It gets worse. If you live in a densely populated building, such as an apartment complex in a major city, then you have hundreds of devices all around you trying to transmit Wi-Fi data. Even if you’re using a non-overlapping channel, transmissions on the same channel can interfere. A wireless dongle on the 2.4GHz band simply has no chance to perform well.
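The overlap rule above can be sketched in a few lines of Python: 2.4GHz channel centers sit 5MHz apart, but each transmission is roughly 22MHz wide, so nearby channels bleed into each other. The helper functions are just for illustration.

```python
def center_mhz(channel):
    """Center frequency of a 2.4GHz Wi-Fi channel in MHz."""
    return 2407 + 5 * channel   # channel 1 -> 2412, channel 11 -> 2462

def overlaps(a, b):
    """Two channels interfere if their ~22MHz-wide signals intersect."""
    return abs(center_mhz(a) - center_mhz(b)) < 22

# Channels 1, 6, and 11 stay clear of each other:
clear = [1, 6, 11]
print(all(not overlaps(a, b) for a in clear for b in clear if a != b))  # True

print(overlaps(1, 3))  # True: channel 3 bleeds into channel 1
```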
If you have to use 2.4GHz, make sure you’re using the newer N mode instead of “legacy” or “mixed” mode, which is limited to 7MB/sec for backward compatibility.
The best solution? Switch to the 5GHz band.
This means you’ll need to upgrade your router to a dual-band model if your current router doesn’t support it. You’ll also need a wireless dongle that’s capable of it. Fortunately, the 5GHz band has 23 non-overlapping channels and many devices still don’t support it, so interference is minimal. Learn more about ways to solve wireless dongle interference.
2. Internal Antenna
Wireless dongles come in two forms: compact ones (which have internal antennas) and bulky ones (which have external antennas).
Compact wireless dongles, sometimes called nano dongles or pico dongles, are preferred by most users because they’re tiny, portable, and more aesthetically pleasing. Who wants a massive antenna sticking out of their device? Nobody, that’s who! Plus, internal antennas are cheaper to produce so compact dongles are more affordable.
While internal antennas have come a long way and aren’t terrible anymore, external antennas generally provide better performance. External antennas often have higher gain and therefore better signal reception. You can point them towards the router for even better reception, and they aren’t as close to internal electronics (which can cause interference).
The best solution? Upgrade to a dongle with an external antenna.
To be fair, there’s nothing wrong with using a compact dongle with compact devices (e.g. Raspberry Pi). Just be aware that you probably won’t get full Wi-Fi speeds. A dongle with an external antenna may be ugly, but is often the more performant option.
3. Hardware Bottlenecks
There are at least three specifications you need to pay attention to.
First, the dongle’s specifications. A dongle labelled as 600Mbps probably doesn’t support that much throughput per band. Instead, it might be 150Mbps on 2.4GHz and 450Mbps on 5GHz, for a total of 600Mbps when both bands are used. Be sure to get a dongle that lives up to your ISP plan’s max speed on the band you’re going to be using.
Second, the USB port you plug into. USB 2.0 ports have a theoretical max speed of 480Mbps, but due to protocol overhead and hardware inefficiencies, the practical max speed is closer to 320Mbps. If you want greater data throughput, be sure to plug the dongle into a USB 3.0 port, which has a theoretical max speed of 5Gbps (faster than any modern residential connection).
Third, your maximum internet speed. If you’re paying for 25Mbps/5Mbps, then no combination of router and dongle will get you faster speeds. And most ISPs don’t actually provide your plan’s full speed 100 percent of the time, so you may need to upgrade to a plan that’s even higher than what you think you need.
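Putting the three specifications together: your real-world speed is capped by the slowest link in the chain. A minimal sketch, using the figures from this section (the function name is mine):

```python
def effective_mbps(dongle_band_mbps, usb_practical_mbps, isp_mbps):
    """Throughput is limited by the weakest link: dongle band, USB port, or ISP plan."""
    return min(dongle_band_mbps, usb_practical_mbps, isp_mbps)

# A "600Mbps" dongle on its 2.4GHz band (150Mbps), a USB 2.0 port
# (~320Mbps in practice), and a 100Mbps ISP plan:
print(effective_mbps(150, 320, 100))  # 100: the ISP plan is the cap

# Same dongle on 5GHz (450Mbps) with a 500Mbps plan: now USB 2.0 is
# the bottleneck, so a USB 3.0 port would be worth it.
print(effective_mbps(450, 320, 500))  # 320
```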
Other Ways to Boost Wi-Fi Performance
If you’ve tried all of the above tips but still experience Wi-Fi performance issues, we highly recommend this article on common reasons why your Wi-Fi is slow and things that could be slowing down your home network. It may not be your wireless dongle after all!
If your Wi-Fi issues stem from distance, such as your router being stuck at the other end of the house, then you should consider increasing your wireless reach using a Wi-Fi extender or powerline adapter. If your speed issues stem from too many users on the network, try using these network-optimizing tips for home routers.
What kind of wireless dongle are you using? Know of any other Wi-Fi performance tips we missed? Share with us down in the comments below!
If a brand new iPhone, Samsung Galaxy or Nokia Lumia is beyond your budget and you don’t want to tie yourself to a contract, buying a used handset is often the popular – and obvious – choice.
However, buying a phone this way can be risky. At best, you might end up with stolen goods; at worst, a handset full of illegal, illicit or confidential material. If your phone was stolen from its previous owner, you will probably encounter problems using it.
What follows is designed for smartphones, but you might also apply it to tablets, particularly those with mobile Internet connectivity.
Where Did You Buy Your Phone?
If you’re reading this then the chances are that you suspect your phone is stolen.
The first thing you need to do is consider where you bought the device. If you exchanged cash (or other hardware) with a friend, it might be a good idea to check with them. What is the background to this phone? How did they come by it, and has it been reliable?
Meanwhile, if you paid someone on Craigslist for the device, you should certainly be making every possible check to confirm that the phone works, is usable and most of all, isn’t stolen. After all, there are plenty of scammers on Craigslist.
Even eBay and Amazon can be unreliable; perhaps the best source for second-hand phones these days are specialist used consumer hardware stores, equipped to run the necessary tests and recondition devices if necessary.
Does The Phone Work?
If the phone works, you’re home and dry; if you can’t place a call or your SIM isn’t accepted, however, then you can probably write the purchase off as a failure. At best, you’ll be able to sell it on as a spares unit – someone might need a new display for a similar device.
Setting a phone up with your SIM card should be easy enough – but with a stolen device it might result in a message such as “There was a problem with the MEID/ESN provided” indicating that the handset has been blocked.
The only way a handset can be blocked is by the network, following a call from the user to report the device as stolen.
Are The Accessories Genuine?
If you’re looking for more evidence that your cool new phone is in fact hot, look no further than the accessories. A handset plucked from a table in a bar or restaurant isn’t likely to have all of the extras such as a charger, cables, headset and case.
A thief planning to sell the smartphone for as much as possible might well buy low-grade accessories in order to create the illusion of a “full set” when in fact the only genuine device is the phone.
This isn’t a rule of thumb, of course, but if a phone you’re interested in is being sold with non-genuine accessories, you should ask why.
Has The Device Been Wiped?
Most stolen smartphones have, sensibly, been wiped. But doing so usually requires the thief to know or bypass the password. If there is no password, or a default code is set, then this won’t be difficult.
You can get an idea as to whether the phone has been stolen or not by checking whether any photos, videos, music or even documents and web page history are stored on it. While it is very wise to wipe a phone before selling it, not everyone does, so this isn’t a guaranteed indication that the handset is stolen when considered on its own. However, if you find a list of contacts or other data and you already have your suspicions about the phone, perhaps it’s time to get the phone checked out properly. Additionally, any social networking services that are running might be a giveaway – although they will also help you to identify the owner.
If you connect a phone that you suspect to be stolen to your computer to check the contents, be careful with what you open – there may be malware waiting.
Check Your Hardware’s Past
How do reconditioned hardware stores know that they aren’t receiving stolen goods? Usually by having access to a database of smartphone IMEI numbers that have been blocked.
You can also access this database using the CheckMEND website, where for just $2.99 (£1.99 in the UK) you can generate a report of your smartphone’s history.
This useful service will help you to find out whether or not your phone is stolen. Sellers on eBay can also use CheckMEND to confirm to potential buyers that their device is legitimate.
Incidentally, your country or region might have a property database that can be used to trace and disable stolen hardware. In the UK, the Immobilize system is supported by the majority of police forces. It lets you register your device’s IMEI number in the world’s largest free database of ownership details, enabling stolen hardware to be returned. If you suspect your phone was lost by or stolen from a previous owner, returning it is the decent thing to do.
Conclusion: Purchase Only From Trusted Sources!
There is a huge second-hand market in smartphones, with eBay, Amazon and even brick-and-mortar video game stores getting in on the act. As long as the devices have been verified as legitimate rather than stolen, you shouldn’t have any problem; if any of the warning signs listed above shows up after you’ve purchased the device, you can at least return it.
However, you should avoid making a purchase via a site like Craigslist, as there is no way to genuinely confirm the phone until you buy it – and by then it might be too late.
Have you been the unwitting recipient of a stolen phone? Were you able to get the device unlocked? Tell us below!
Normally the computer processor in your laptop or desktop has a standard clock speed which partially determines how quickly it performs. While the processor might lower its clock speed at times in order to conserve power, the clock speed which is stated when you buy the computer is the fastest clock speed you’ll receive unless you decide to overclock.
If you do decide to overclock, or you ever speak to someone who regularly overclocks processors, you’ll discover a dirty little secret – the clock speed a processor ships at is typically much lower than the actual maximum clock speed which the processor could achieve.
That extra headroom goes unused because the manufacturer (Intel or AMD) needs to plan for worst-case scenarios, which means a processor sold as a 3GHz processor must work at that speed even if someone decides to use a winter jacket as a PC case.
At least, that is how processors used to be. However, Intel’s new Core i5 and Core i7 processors have a feature called Turbo Boost which has the ability to dynamically scale up the clock speed of a processor depending on the thermal headroom available.
How Intel Turbo Boost Works
Intel Turbo Boost monitors the current usage of a Core i5 or i7 processor to determine how close the processor is to the maximum thermal design power, or TDP. The TDP is the maximum amount of power the processor is supposed to use. If the Core i5 or i7 processor sees that it is operating well within limits, Turbo Boost kicks in.
Turbo Boost is a dynamic feature. There is no set-in-stone speed which the Core i5 or i7 processor will reach when in Turbo Boost. Turbo Boost operates in 133MHz increments and will scale up until it either reaches the maximum Turbo Boost allowed (which is determined by the model of processor) or the processor comes close to its maximum TDP. For example, the Core i5 750 has a base clock speed of 2.66GHz but a maximum Turbo Boost speed of 3.2GHz.
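The stepping behavior described above can be sketched as a toy model. To be clear, the function names and the idea of counting “headroom steps” are illustrative assumptions of mine, not Intel’s actual algorithm, which depends on firmware, workload, and thermal sensors:

```python
# Toy model of Turbo Boost stepping (illustrative only).
BCLK_STEP_MHZ = 133  # Turbo Boost scales in 133MHz increments

def boosted_clock(base_mhz, max_boost_mhz, thermal_headroom_steps):
    """Return the clock after applying up to `thermal_headroom_steps`
    133MHz increments, capped at the model's maximum boost speed."""
    target = base_mhz + thermal_headroom_steps * BCLK_STEP_MHZ
    return min(target, max_boost_mhz)

# Core i5 750: 2.66GHz base, 3.2GHz maximum Turbo Boost
print(boosted_clock(2660, 3200, 2))   # limited thermal headroom: 2926 MHz
print(boosted_clock(2660, 3200, 10))  # plenty of headroom: capped at 3200 MHz
```

The cap is the key point: no matter how much thermal headroom exists, the chip never exceeds the maximum boost speed its model allows.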
However, Intel still advertises these processors by their base clock speed. This is because Intel does not guarantee that a processor will ever hit its maximum Turbo Boost speed. I have yet to hear of an Intel processor which can’t hit its maximum Turbo Boost speed, but hitting the maximum Turbo Boost is dependent on workload – it won’t happen all of the time.
Why Turbo Boost Rocks
Despite Turbo Boost’s lack of predictability, it is still an excellent feature. It provides a solution to the problem of compromising between dual and quad core processors.
Before Turbo Boost the choice of purchasing a dual core or quad core processor was a compromise. Dual core processors were clocked faster than quad core processors simply because having more cores increases power consumption and heat generation. Some programs, like games, favored dual core processors, while other programs, like 3D rendering software, favored quad cores. If you used both types of applications you had to make a choice about which was most important to you. You couldn’t receive maximum performance in both from a single processor.
Turbo Boost gets rid of this compromise. If you use the Core i5 750 in a 3D rendering application it will probably only operate at its base clock speed because all four cores will be used. However, if you use the Core i5 750 with a game which only needs two cores – presto! – the third and fourth cores go into a low power state and the two cores you’re actually using are running at a clock speed as fast as what you’d expect from a standard dual core processor.
The Future of Intel Turbo Boost (and AMD’s Response)
Turbo Boost is a great feature, and it is part of the reason why Intel’s latest processors are often superior to those from AMD. However, there is still more potential to be tapped. By the end of 2010 Intel will have released ultra-low voltage Core i5 and i7 processors for laptops. These processors will use Turbo Boost as a way of improving battery life.
For example, Intel will be releasing a processor called the Core i7 620UM. This processor has a base clock speed of only 1.06GHz. However, it has a maximum Turbo Boost of 2.133GHz. What we will end up with is a processor which will run at only the base clock when on battery but can double its speed when plugged in.
Intel’s success with Turbo Boost has not gone unnoticed by AMD, however. With the release of the six-core AMD processors, such as the Phenom II X6 1090T, AMD has introduced a similar feature called Turbo Core. Turbo Core isn’t as advanced as Intel’s Turbo Boost, but it is a clear sign of the direction processors will be taking in the future.
It appears the days of set-in-stone processor clock speeds are over. The future will be about changing a processor’s performance on the fly to meet the demands of the user.
Did this article help you understand more about Turbo Boost and why you need it? Still not sure about something? Go ahead and get it answered in the comments.
Have any of your devices ever displayed an error message pertaining to an IP address conflict? If so, you probably found yourself unable to connect to the Internet, either because you simply don’t have access or the connection has just been rendered unusable.
Although it’s not something that commonly occurs, IP address conflicts are a very real issue and can very much inconvenience the user. When two or more IP addresses conflict, the result can be one or more computers or devices that have been rendered completely useless in terms of network connectivity. Fortunately, there are ways to resolve the issue when conflicts occur.
What is an IP conflict?
IP conflicts occur when two or more computers or devices (like a tablet) on the same network end up being assigned the same IP address. An IP (Internet Protocol) address is your computer’s unique identifier, made up of a string of numbers, such as 192.168.8.4. Without one, you can’t connect to the network. Usually a warning or error message will pop up, alerting you to the problem. Sometimes these issues resolve themselves, but that’s not always the case.
Usually, IP address conflicts occur on a LAN (local area network), although they may also be seen between multiple devices connected to the Internet. Any device that has an IP address could potentially have a conflict with another device.
An IP address can be either static or dynamic. A static IP address never changes and is manually assigned. Dynamic IP addresses, on the other hand, are only temporary and a new one is assigned every time your computer or device connects to the Internet or your router.
Conflicts can happen with both static and dynamic IP addresses, although they are less likely to occur with static addresses today, because typically a DHCP (dynamic host configuration protocol) server, which is built into most routers, is used to manage and assign IP addresses. DHCP servers have a pool of IP addresses, called a scope, and from that pool addresses are assigned to devices in response to a system request for an IP address.
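The scope-based assignment described above can be sketched as a toy allocator. This is a simplified illustration with hypothetical names; a real DHCP server also handles lease expiry, renewals, and broadcast traffic:

```python
import ipaddress

class TinyDhcpPool:
    """Simplified DHCP scope: hands each requesting device a unique
    address from the pool and remembers the assignment."""

    def __init__(self, start, end):
        lo = int(ipaddress.IPv4Address(start))
        hi = int(ipaddress.IPv4Address(end))
        self.free = [str(ipaddress.IPv4Address(i)) for i in range(lo, hi + 1)]
        self.leases = {}  # MAC address -> IP address

    def request(self, mac):
        if mac in self.leases:       # same device asks again: same address back
            return self.leases[mac]
        if not self.free:
            raise RuntimeError("scope exhausted")
        ip = self.free.pop(0)        # hand out the next unused address
        self.leases[mac] = ip
        return ip

pool = TinyDhcpPool("192.168.8.2", "192.168.8.50")
a = pool.request("aa:bb:cc:00:00:01")
b = pool.request("aa:bb:cc:00:00:02")
print(a, b)  # two devices, two distinct addresses - no conflict possible
```

Because a single allocator owns the whole scope, no two devices can ever receive the same address – which is exactly the guarantee that breaks down when two DHCP servers manage overlapping pools.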
Why Does It Happen?
An IP conflict can happen for various reasons, but a classic example is when two or more systems are assigned the same static IP address. This doesn’t happen as much now thanks to DHCP servers. However, if you have more than one DHCP server running (which you shouldn’t), similarly configured servers may hand out identical addresses to multiple devices.
If you have another device that facilitates a network connection, it may have an embedded DHCP server that is turned on by default. In this case, going in and switching off the server will resolve the issue.
An ISP may also assign multiple customers the same address. If a device is set up with multiple network adapters, then it could potentially experience an IP address conflict with itself.
A conflict may also occur if a device originally connected to one network enters standby mode and later comes back on, but connected to a different network with a device that has the same IP address. This could happen with a work laptop being brought home, or even when traveling with a laptop or another device.
How Do You Resolve It?
IP conflicts sometimes just work themselves out, but that could take a while, if it even happens at all. Resolving the issue could be as simple as just restarting the router. If every device in the network was assigned dynamic IP addresses, the issue should be fixed as the router reboots and re-assigns IP addresses to every device in the network.
Another way to fix this issue (at least on Windows systems) is to release and renew the IP address via the Command Prompt. Open the Command Prompt (you can do this by going to your Start menu and searching for “command prompt”), type “ipconfig /release” in the window, and press Enter. Then type “ipconfig /renew” and press Enter, and the DHCP server will assign a new IP address to your computer.
If that doesn’t work, you will need to identify the conflicting addresses in the router’s administrative dashboard and either manually assign new IP addresses, or configure the devices to automatically obtain IP addresses.
Finally, if none of this works or you encounter this frequently, your router may have a faulty DHCP server. In that case, you would have to upgrade the firmware. You can usually get a firmware update from the manufacturer’s website, which can be installed manually.
As we connect more and more devices to our networks and the Internet, the likelihood of experiencing some sort of IP conflict goes up. While it isn’t something that one can expect to encounter every day (and if you do deal with this frequently, you should upgrade your firmware), it is good to understand the issue and know what steps to take to resolve it.
Have you ever encountered an IP conflict? Do you know what caused it? How did you deal with it? Leave a comment below and tell me about it!
Image credits: DHCP by Rodrigozanatta via Wikimedia Commons
We’ve seen so many changes over just the last couple of years that it’s getting rather difficult to keep up with everything. However, you can still get the gist that everything is moving towards the web, which is now more commonly being dubbed “the cloud” (except that it doesn’t rain on you).
As such, your devices should probably be ready and well equipped to make full use of cloud services for your convenience. However, our big and slow desktops and laptops still have many unnecessary components from our long computing past. At least, that’s what Google says.
Google’s Approach with Computers
Google has decided to take a different approach with computers, right down to the hardware. As computing is moving to the cloud, where Google is a major player of cloud services with Gmail, Google Docs, and much more, computers should depend less on data stored on the machine itself but rather put all the data in the cloud.
Plus, most of a modern user’s activities are online, where a browser is used to surf around Facebook, play Flash games, and more. Rarely do they touch other applications, especially any that cannot be replicated through online cloud services (such as Microsoft Office → Google Docs). With that logic, Google came up with the Chromebook.
A Chromebook is just like a small laptop, with some key differences. It is relatively thin, and to the untrained eye doesn’t seem to have an operating system. Yes, you heard right, there’s no obvious operating system. Of course there is one, else the device wouldn’t work, but there isn’t a Start Menu or anything else that you recognize as part of your operating system. Instead, all you get is a nice login screen and a browser. That’s it.
Benefits of Chrome
As it’s Google’s device/idea, the included browser is obviously Chrome. Honestly, that isn’t a bad thing, as Chrome’s userbase is growing at an exponential rate. Just look at this graph if you don’t believe me! Also, Chrome’s focus on speed and overall performance is a plus for the low-powered device (which still sports a dual-core Intel Atom processor).
That’s it; it’s just you and the Chromebook with literally only Chrome on it. While it isn’t meant to replace all your computers (and I don’t see that happening anytime soon), it is a good replacement for netbooks and laptops for those who just use the Internet anyways.
Hardware and Firmware Differences
Because the devices are built for the web (cough, Chrome), Google’s engineers have put a lot of effort into stripping out the many system processes a traditional computer carries out that, in this case, are unnecessary – checking for devices to see which one to boot from, and so on. With those checks removed, the device is ready to log you in roughly 8 seconds after booting. And the device is also supposed to wake up instantly.
The Samsung models of the Chromebook get around 8.5 hours of battery life, which should be enough to get you through the day.
Go Worry-Free with Automatic Updates
Also in line with this simplistic approach, Google makes sure that any behind-the-scenes jobs such as updates and corruption protection are taken care of automatically and transparently. Since everything a user does on a Chromebook is stored online, Google emphasizes the ease of simply using a different device (whether Chromebook or not) to get back to the data you’ve always had in front of you in case your Chromebook breaks, gets stolen, or worse.
Pricing and Alternatives
You can see the speed of the Chromebook in the video below, which almost instantly makes me want to go get one, but sadly I don’t have $430 to spend on a Wi-Fi only model just yet (or $500 for a 3G model where you only get 100MB/month for free for two years).
However, I’m researching whether it may be possible to install the operating system that ships on Chromebooks onto your own laptop or netbook, but I need to find out more first. A safer, sure-fire way to get a similar experience is to install JoliOS, which happens to be installed on the other computer used in the comparison video below.
There isn’t much else to say about these little devices, as Google wanted to make them as simple as possible with the “Boot and Go” approach. They seem to be a nice, practical tool for the average user once they get to understand that there are online versions of Word, PowerPoint, and Excel. While this isn’t the perfect device for everyone, it can still be helpful for plenty.
What do you think about Chromebooks? Will they eventually become a tool that everyone can use for anything? Let us know in the comments!
Bluetooth is the forgotten star on the device specifications sheet. While it isn’t going to turn any heads, every once in a while you forget just how valuable it is, then remember why it’s there in the first place.
It’s not a feature we talk much about anymore, but the standard is improving, and it’s getting to the point of becoming extremely relevant in the near future.
With Bluetooth 4.0, it’s not so much the technology as it is the leap forward that gets people like me excited. Each iteration of Bluetooth has brought it one step closer to becoming a technology that bridges the gap between the current state of connectivity and a true, always-on society. Today, it’s time to look at what it is, and some of the reasons it’s worth getting legitimately excited about.
The Reasons We All Hate Bluetooth
By now we all know what Bluetooth is, but if you’re after a more thorough overview, we have a comprehensive description of Bluetooth from way back in 2009 that’s just as true today as it was back then.
Instead of reliving the basics, let’s instead focus on why it was never really all that exciting.
For one, Bluetooth has a relatively short range. It was never intended to be a WiFi replacement and this much was evident in its dismal speeds (3Mbit/s until version 3.0+HS).
Other frustrations stemmed from its propensity for interference from other devices in the 2.4GHz range, and the rage-inducing (and often completely random) incompatibility issues faced when it comes to pairing some Bluetooth-enabled devices.
In short, Bluetooth has never been all that sexy due to some fundamental flaws in the ways previous iterations worked – or didn’t work.
Bluetooth Versions Compared
While nearly extinct now, Bluetooth 1.x provided the framework for each of the newer generations. It featured very basic capabilities, such as a theoretical 1Mbit/s data transfer rate (which actually maxed out around 721kbit/s) and some severe issues with connectivity.
Some of the major flaws in the technology were the use of version 1.0 and 1.0B as separate standards, which made universal connectivity nothing more than a pipe dream. In addition, anonymity was impossible due to mandatory Bluetooth hardware device address transmission in the connection process.
Bluetooth 1.1 and 1.2 were slightly better. Bluetooth v1.1 fixed most of the connectivity issues in the 1.0B specification and boosted signal strength. Version 1.2 allowed for adaptive frequency-hopping (AFH) which improved some of the problems with 2.4GHz device interference. Additionally, we saw the Extended Synchronous Connections (eSCO) which re-transmitted corrupted data packets for better transmission quality.
Most of us had never heard of Bluetooth until the 2.0 standard released in 2004. While the connectivity issues remained, Bluetooth 2.0 featured the introduction of EDR (Enhanced Data Rate) for significantly faster data transfer speeds. Theoretical speeds of EDR were hyped at about 3Mbit/s, but the reality was a top transfer speed of around 2Mbit/s.
It took three years for the next version of the technology – Bluetooth v2.1 + EDR – to come to fruition. The milestone feature in v2.1 was SSP (Secure Simple Pairing), which aimed to improve the process of connecting devices to one another over Bluetooth. With SSP came an additional technology, EIR (Extended Inquiry Response), which allowed for better filtering of Bluetooth-enabled devices.
In theory, it was attempting to remove most of the haystack before searching for the needle. While it helped, you ultimately still found the needle by accident. That is to say, a successful connection wasn’t planned, but it was always a happy occurrence, even if there was a little pain involved.
The Bluetooth SIG (Special Interest Group) adopted version 3.0 in early 2009. While there were numerous technological advancements, the most important of these were the HS (High-Speed) standard and the introduction of Enhanced Power Control, which actually forms the foundation for what makes Bluetooth 4.0 so special.
Bluetooth HS uses AMP (Alternative MAC/PHY) to introduce 802.11 (a common WiFi standard) as a method of high speed transfer. Using HS allows Bluetooth transfer rates to exceed 20Mbit/s through use of 802.11 to transfer larger packets.
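A quick back-of-the-envelope calculation shows the difference those rates make. These are idealized numbers of my own choosing; real-world throughput is lower due to protocol overhead:

```python
def transfer_seconds(size_mb, rate_mbit_s):
    """Ideal transfer time: convert megabytes to megabits (x8),
    then divide by the link rate in Mbit/s."""
    return size_mb * 8 / rate_mbit_s

photo_batch_mb = 30  # e.g. a folder of holiday photos
print(transfer_seconds(photo_batch_mb, 3))   # classic Bluetooth EDR: 80.0 s
print(transfer_seconds(photo_batch_mb, 24))  # Bluetooth HS over 802.11: 10.0 s
```

An eight-fold rate increase turns a minute-plus wait into ten seconds – which is why AMP hands large transfers off to 802.11 while keeping small packets on the low-power Bluetooth radio.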
The discovery, initial connection and configuration all take place using standard Bluetooth technology, but the addition of AMP allows devices to transmit large amounts of data back and forth using 802.11 when needed. Smaller packets and idling systems still use traditional Bluetooth connection technology.
The Enhanced Power Control feature allows for a smart adjustment of power based on perceived needs by the device. For example, a Bluetooth-enabled device (such as a smartphone) has the option to operate at the minimum power level needed to retain a quality connection while increasing the power consumption if you were to move the phone further from the device it’s connected to.
What Does Bluetooth 4.0 Bring to the Table?
Bluetooth 4.0 – or “Bluetooth Smart” – was released in 2010, just a year after 3.0. While the release itself wasn’t all that exciting, the implications that we’re just now beginning to understand after the release of v4.1 and v4.2 are making some realize the potential far exceeds gaudy earpieces and in-car streaming (or MP3 playback).
The release of 4.0 saw an improvement in connectivity through use of three main protocols: classic Bluetooth, Bluetooth High Speed and Bluetooth Low Energy (BLE). Bluetooth-enabled devices are now smart enough to use the appropriate protocol depending on the other device’s capabilities or the application’s specific needs.
Apart from an improvement in connectivity, 4.0 features a number of additional benefits. The most notable of these is the Low Energy protocol (LE).
In days past, Bluetooth had a notable parasitic effect on devices, increasing the rate at which battery power was consumed. Staying connected, or constantly searching for new devices to connect to, significantly shortened the battery life of most devices. With the newer low energy technology, Bluetooth devices can remain connected indefinitely without much of an effect on battery life.
Additionally, single-mode Bluetooth chips are cheaper than ever, leading us to a future that could potentially feature throw-away chips for single use applications. While we’re not there yet, this level of affordability opens the doors to a significant number of new uses which could continue to spur mobile device innovation for years to come.
Why It Could Be a Key Piece of the Always-on Future
The low energy mode and 24Mbit/s transfer rate make Bluetooth an ideal solution for seemingly stalled technologies such as smart wallets and the IoT (Internet of Things).
While Bluetooth was once billed as the solution to completely connected devices, battery usage, connectivity and range-related issues offered a bit of a hurdle. With these problems mostly addressed, and improving with each new Bluetooth protocol, the hurdles are starting to appear more like steps rather than obstacles impeding our progression toward a completely connected stable of devices.
As far as digital wallets go, Bluetooth offers possibly the most straightforward path to complete digital currency adoption. In fact, many industry leaders in the digital wallet space (such as Apple and PayPal), as well as major retailers, are betting heavily on beacon-based technology that makes use of Bluetooth LE’s proximity sensing to transmit (or recognize) a unique identifier.
Put simply, with less battery drain on a device, Bluetooth serves as an always-on technology. Beacons are devices that send a signal to your device when you’re in proximity of the sensor.
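Proximity sensing like this is typically estimated from received signal strength. Here’s a minimal sketch using the standard log-distance path-loss model; note that the calibration values below are hypothetical and vary per beacon and environment:

```python
def estimated_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    """Rough distance estimate from a beacon's received signal strength
    (RSSI) using the log-distance path-loss model. `tx_power_dbm` is the
    beacon's calibrated RSSI at 1 metre (device-specific);
    `path_loss_exp` is an environment factor (~2 in free space,
    higher indoors with walls and people in the way)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

print(round(estimated_distance_m(-59), 2))  # at the calibration power: ~1 m
print(round(estimated_distance_m(-79), 2))  # 20 dB weaker: ~10 m
```

This is why beacon apps report fuzzy zones like “immediate”, “near” and “far” rather than exact distances: the estimate swings with every reflection and obstruction.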
Here are just a few of the ways major companies are already using the technology:
Macy’s: Just in time for the holiday shopping season, Macy’s deployed beacon technology in over 800 stores nationwide. The beacons tracked customer movements throughout the store and offered product recommendations and coupons based on their location.
Major League Baseball (MLB): MLB fitted 28 of its 30 ballparks with iBeacons (Apple’s beacon technology) for the 2014 season. Ballparks used the technology to push merchandise coupons, offer seat upgrades (complete with a mobile payment option), and prompt fans to install the MLB app for additional special offers, content, and information such as stadium maps, seating charts, and player details.
Starwood Hotels: In a show of what Bluetooth is actually capable of, Starwood Hotels took a different beacon-based approach. Rather than offering coupons, Starwood allowed guests to check in from their mobile device (rather than waiting at the front desk), walk straight to their room, and open the door via keyless entry, triggered when the beacon detects their phone. If that wasn’t cool enough, housekeeping also has access to beacon data, so they won’t have to disturb you if you’re still in the room.
Is It Worth Getting Excited?
Low energy usage, increased range, improvements in connectivity and the sheer number of devices that already feature Bluetooth 4.0 connectivity make it an impossible technology to ignore. As for what the future holds, I can only speculate; but I can tell you that Bluetooth could change the way we do just about everything on our mobile devices.
Or, it could prove to be a monumental waste of a real disruptive technology. I guess we’ll just have to wait and see.
What do you think?