The fallacy of ‘synthetic benchmarks’
Preface

Apple's M1 has caused a lot of people to start talking about and questioning the value of synthetic benchmarks, as well as other (often indirect or badly controlled) information we have about the chip and its predecessors.
I recently got in a Twitter argument with Hardware Unboxed about this very topic, and given it was Twitter you can imagine why I feel I didn't do a great job explaining my point. This is a genuinely interesting topic with quite a lot of nuance, and the answer is neither ‘Geekbench bad’ nor ‘Geekbench good’.
Note that people have M1s in hand now, so this isn't a post about the M1 per se (you'll have whatever metric you want soon enough), it's just using this announcement to talk about the relative qualities of benchmarks, in the context of that discussion.
What makes a benchmark good?

A benchmark is a measure of a system, the purpose of which is to correlate reliably with actual or perceived performance. That's it. Any benchmark which correlates well is Good. Any benchmark that doesn't is Bad.
There is a common conception that ‘real world’ benchmarks are Good and ‘synthetic’ benchmarks are Bad. While there is certainly a grain of truth to this, as a general rule it is wrong. In many respects, as we'll discuss, the dividing line between ‘real world’ and ‘synthetic’ is entirely illusory, and good synthetic benchmarks are specifically designed to tease out precisely those factors that correlate with general performance, whereas naïve benchmarking can produce misleading or unrepresentative results even if you are only benchmarking real programs. Most synthetic benchmarks even include what are traditionally considered real-world workloads, like SPEC 2017 including the time it takes for Blender to render a scene.
As an extreme example, large file copies are a real-world test, but a ‘real world’ benchmark that consists only of file copies would tell you almost nothing general about CPU performance. Alternatively, a company might know that 90% of their cycles are in a specific 100-line software routine; testing that routine in isolation would be a synthetic test, but it would correlate almost perfectly for them with actual performance.
On the other hand, it is absolutely true there are well-known and less-well-known issues with many major synthetic benchmarks.
Boost vs. sustained performance

Lots of people seem to harbour misunderstandings about instantaneous versus sustained performance.
Short workloads capture instantaneous performance, where the CPU has the opportunity to boost up to frequencies higher than the cooling can sustain. This is a measure of peak or burst performance, and is affected by boost clocks. In this regime you are measuring the CPU at the absolute fastest it is able to run.
Peak performance is important for making computers feel ‘snappy’. When you click an element or open a web page, the workload takes place over a few seconds or less, and the higher the peak performance, the faster the response.
Long workloads capture sustained performance, where the CPU is limited by the ability of the cooling to extract and remove the heat it generates. Almost all the power a CPU uses ends up as heat, so the cooling determines an almost completely fixed power limit. Given a sustained load and two CPUs using the same cooling, both of which are hitting the power limit defined by the quality of that cooling, you are measuring performance per watt at that wattage.
Sustained performance is important for demanding tasks like video games, rendering, or compilation, where the computer is busy over long periods of time.
Consider two imaginary CPUs; let's call them Biggun and Littlun. Biggun might be faster than Littlun in short workloads, because Biggun has a higher peak performance, but Littlun might be faster in sustained performance, because Littlun has better performance per watt. Remember, though, that performance per watt is a curve, and peak power draw also varies by CPU. Maybe Littlun uses only 1 watt and Biggun uses 100 watts, so Biggun still wins at 10 watts of sustained power draw; or maybe Littlun can boost all the way up to 10 watts, but is especially inefficient when doing so.
In general, architectures designed for lower base power draw (eg. most Arm CPUs) do better under power-limited scenarios, and therefore do relatively better on sustained performance than they do on short workloads.
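The Biggun/Littlun trade-off above can be sketched numerically. This is a toy model with entirely made-up numbers: the sublinear perf-vs-watts curve and all the constants are my assumptions for illustration, not measurements of any real chip.

```python
def perf(base, watts, max_watts, exponent=0.5):
    """Toy model: performance rises sublinearly with power draw,
    capped at the chip's maximum power. Purely illustrative."""
    return base * min(watts, max_watts) ** exponent

# Hypothetical chips: Littlun is more efficient per watt,
# but Biggun can draw far more power in total.
def biggun(watts):
    return perf(base=10, watts=watts, max_watts=100)

def littlun(watts):
    return perf(base=20, watts=watts, max_watts=10)

for budget in (1, 10, 100):
    winner = "Biggun" if biggun(budget) > littlun(budget) else "Littlun"
    print(f"{budget:>3} W sustained: {winner} wins")
```

Under this toy curve Littlun wins at 1 W and 10 W, but Biggun pulls ahead once the cooling allows power Littlun can't draw at all. Real perf/watt curves are much messier, but the crossover behaviour is the point.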
On the Good and Bad of SPEC

SPEC is an ‘industry standard’ benchmark. If you're anything like me, you'll notice pretty quickly that this term fits both the ‘good’ and the ‘bad’. On the good side, SPEC is an attempt to satisfy a number of major stakeholders, who have a vested interest in a benchmark that is something they, and researchers generally, can optimize towards. The selection of benchmarks was not arbitrary, and the variety captures a lot of interesting and relevant facets of program execution. Industry still uses the benchmark (and not just for marketing!), as does a lot of unaffiliated research. As such, SPEC has also been well studied.
SPEC includes many real programs, run over extended periods of time. For example, 400.perlbench runs multiple real Perl programs, 401.bzip2 runs a very popular compression and decompression program, 403.gcc tests compilation speed with a very popular compiler, and 464.h264ref tests a video encoder. Despite being somewhat aged and a bit light, the performance characteristics are roughly consistent with the updated SPEC2017, so it is not generally valid to dismiss the results on account of age, a common criticism.
One major catch with SPEC is that official benchmark runs often play shenanigans: compilers have found ways, often very much targeted towards gaming the benchmark, to compile the programs such that execution becomes significantly easier, at times even because of improperly written programs. 462.libquantum is a particularly broken benchmark. Fortunately, this behaviour can be controlled for, and it does not particularly endanger results from AnandTech, though one should be on the lookout for anomalous jumps in single benchmarks.
A more concerning catch, in this circumstance, is that some benchmarks are very specific, with most of their runtime in very small loops. The paper Performance Characterization of SPEC CPU2006 Integer Benchmarks on x86-64 Architecture (as one of many) goes over some of these in section IV. For example, most of the time in 456.hmmer is in one function, and 464.h264ref's hottest loop contains many repetitions of the same line. While, certainly, a lot of code contains hot loops, the performance characteristics of those loops are rarely precisely the same as for those in some of the SPEC 2006 benchmarks. A good benchmark should aim for general validity, not specific hotspots, which are liable to be overtuned.
SPEC2006 includes a lot of workloads that make more sense for supercomputers than personal computers, such as including lots of Fortran code and many simulation programs. Because of this, I largely ignore the SPEC floating point; there are users for whom it may be relevant, but not me, and probably not you. As another example, SPECfp2006 includes the old rendering program POV-Ray, which is no longer particularly relevant. The integer benchmarks are not immune to this overspecificity; 473.astar is a fairly dated program, IMO. Particularly unfortunate is that many of these workloads are now unrealistically small, and so can almost fit in some of the larger caches.
SPEC2017 makes the great decision to add Blender, as well as updating several other programs to more relevant modern variants. Again, the two benchmarks still roughly coincide with each other, so SPEC2006 should not be altogether dismissed, but SPEC2017 is certainly better.
Because SPEC benchmarks include disaggregated scores (as in, scores for individual sub-benchmarks), it is easy to check which scores are favourable. For SPEC2006, I am particularly favourable to 403.gcc, with some appreciation also for 400.perlbench. The M1 results are largely consistent across the board; 456.hmmer is the exception, but the commentary discusses that quirk.
(and the multicore metric)

SPEC has a ‘multicore’ variant, which literally just runs many copies of the single-core test in parallel. How workloads scale to multiple cores is highly test-dependent, and depends a lot on locks, context switching, and cross-core communication, so SPEC's multi-core score should only be taken as a test of how much the chip throttles down in multicore workloads, rather than a true test of multicore performance. However, a test like this can still be useful for some datacentres, where every core is in fact running independently.
On the Good and Bad of Geekbench

Geekbench does some things debatably, some things fairly well, and some things awfully. Let's start with the bad.
To produce the aggregate scores (the final score at the end), Geekbench does a geometric mean of each of the two benchmark groups, integer and FP, and then does a weighted arithmetic mean of the crypto score with the integer and FP geometric means, with weights 0.05, 0.65, and 0.30. This is mathematical nonsense, and has some really bad ramifications, like hugely exaggerating the weight of the crypto benchmark.
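A toy recomputation makes the problem concrete. The 0.05/0.65/0.30 weights come from the description above; the subscores and the number of subtests are invented for illustration.

```python
from math import prod

def geomean(xs):
    return prod(xs) ** (1 / len(xs))

def aggregate(crypto, int_scores, fp_scores):
    # Weighted arithmetic mean of the raw crypto score and the two
    # geometric means, as described above (weights 0.05, 0.65, 0.30).
    return (0.05 * crypto
            + 0.65 * geomean(int_scores)
            + 0.30 * geomean(fp_scores))

# Two hypothetical chips, identical everywhere except a 4x faster
# crypto unit on the second one.
base = aggregate(1000, [1000] * 10, [1000] * 10)      # ~1000
fast_aes = aggregate(4000, [1000] * 10, [1000] * 10)  # ~1150

print(round(fast_aes / base, 2))  # a single subtest moved the total 15%
```

Had the crypto score instead been folded into a single geometric mean across all 21 subtests, a 4x gain on one test would move the total by only 4^(1/21), roughly 7%. Mixing an arithmetic mean into the aggregation lets one outlier subtest contribute linearly.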
Secondly, the crypto benchmark is garbage. I don't always agree with his rants, but Linus Torvalds' rant is spot on here: https://www.realworldtech.com/forum/?threadid=196293&curpostid=196506. It matters that CPUs offer AES acceleration, but not whether it's X% faster than someone else's, and this benchmark ignores that Apple has dedicated hardware for IO, which handles crypto anyway. This benchmark is mostly useless, yet can be weighted extremely heavily due to the score aggregation issue.
Consider the effect this has on the aggregate scores: the sub-benchmarks are not carefully chosen to be perfectly representative of their classes.
Unfortunately, many of the workloads beyond just AES are pretty questionable, as many are unnaturally simple. It's also hard to characterize what they do well; the SQLite benchmark could be really good, if it was following realistic usage patterns, but I don't think it is. Lots of workloads, like the ray tracing one, are good ideas, but the execution doesn't match what you'd expect of real programs that do that work.
Note that this is not a criticism of benchmark intensity or length. Geekbench makes a reasonable choice to only benchmark peak performance, by only running quick workloads, with gaps between each bench. This makes sense if you're interested in the performance of the chip, independent of cooling. This is likely why the fanless Macbook Air performs about the same as the 13" Macbook Pro with a fan. Peak performance is just a different measure, not more or less ‘correct’ than sustained.
On the good side, Geekbench contains some very sensible workloads, like LZMA compression, JPEG compression, HTML5 parsing, PDF rendering, and compilation with Clang. Because it's a benchmark over a good breadth of programs, many of which are realistic workloads, it tends to capture many of the underlying facets of performance in spite of its flaws. This means it correlates well with, eg., SPEC 2017, even though SPEC 2017 is a sustained benchmark including big ‘real world’ programs like Blender.
To make things even better, Geekbench is disaggregated, so you can get past the bad score aggregation and questionable benchmarks just by looking at the disaggregated scores. In the comparison before, if you scroll down you can see individual scores. M1 wins the majority, including Clang and Ray Tracing, but loses some others like LZMA and JPEG compression. This is what you'd expect given the M1 has the advantage of better speculation (eg. larger ROB) whereas the 5900X has a faster clock.
(and under Rosetta)

We also have Geekbench scores under Rosetta. There, one needs to take a little more caution, because translation can sometimes behave worse on larger programs, due to certain inefficiencies, or better when certain APIs are used, or worse if the benchmark includes certain routines (like machine learning) that are hard to translate well. However, I imagine the impact is relatively small overall, given Rosetta uses ahead-of-time translation.
(and the multicore metric)

Geekbench doesn't clarify this much, so I can't say much about this. I don't give it much attention.
(and the GPU compute tests)

GPU benchmarks are hugely dependent on APIs and OSs, to a degree much larger than for CPUs. Geekbench's GPU scores don't have the mathematical error that the CPU scores do, but that doesn't mean they're easy to compare. This is especially true given there is only a very limited selection of GPUs with first-party support on iOS.
None of the GPU benchmarks strike me as particularly good, in the way that benchmarking Clang is easily considered good. Generally, I don't think you should put much stock in Geekbench GPU.
On the Good and Bad of microarchitectural measures

AnandTech's article includes some of Andrei's traditional microarchitectural measures, as well as some new ones I helped introduce. Microarchitecture is a bit of an odd point here, in that if you understand how CPUs work well enough, these measures can tell you quite a lot about how the CPU will perform, and in what circumstances it will do well. For example, Apple's large ROB but lower clock speed is good for programs with a lot of latent but hard-to-reach parallelism, but would fare less well on loops with a single critical path of back-to-back instructions. Andrei has also provided branch prediction numbers for the A12, and again this is useful and interesting for a rough idea.
However, naturally this cannot tell you performance specifics, and many things can prevent an architecture from living up to its theoretical capabilities. It is also difficult for non-experts to make good use of this information. The most clear-cut thing you can do with it is use it as a means of explanation and sanity-checking. It would be concerning if the M1 were performing well on benchmarks with a microarchitecture that did not suggest that level of general performance. However, at every turn the M1's microarchitecture does, so the performance numbers are more believable for knowing the workings of the core.
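The ROB-versus-clock trade-off described above can be illustrated with a toy scheduling model. The core widths, clock speeds, and the uniform 1-cycle latency are all my invented assumptions, not figures for any real core.

```python
import math

def min_cycles(deps, width):
    """Lower bound on cycles for a toy machine where every instruction
    has 1-cycle latency and up to `width` independent instructions can
    start per cycle: max(critical path length, count / width).
    `deps[i]` is the index instruction i depends on, or None."""
    depth = [0] * len(deps)
    for i, dep in enumerate(deps):
        depth[i] = 1 if dep is None else depth[dep] + 1
    return max(max(depth), math.ceil(len(deps) / width))

N = 64
chain = [None] + list(range(N - 1))   # each op depends on the previous one
parallel = [None] * N                 # fully independent ops

# Hypothetical cores: "Wide" is 8-wide at 1.0 GHz, "Fast" is 2-wide at 1.5 GHz.
for name, deps in (("parallel", parallel), ("chain", chain)):
    wide_time = min_cycles(deps, 8) / 1.0
    fast_time = min_cycles(deps, 2) / 1.5
    print(name, "favours", "Wide" if wide_time < fast_time else "Fast")
```

The wide core wins when there is latent parallelism to extract; the higher-clocked core wins on the serial chain, where extra width is useless because the critical path sets the cycle count.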
On the Good and Bad of Cinebench

Cinebench is a real-world workload, in that it's just the time it takes for a program in active use to render a realistic scene. In many ways, this makes the benchmark fairly strong. Cinebench is also sustained, and optimized well for using a huge number of cores.
However, recall what makes a benchmark good: to correlate reliably with actual or perceived performance. Offline CPU ray tracing (which is very different to the realtime GPU-based ray tracing you see in games) is an extremely important workload for many people doing 3D rendering on the CPU, but is otherwise a very unusual workload in many regards. It has a tight rendering loop with very particular memory requirements, and it is almost perfectly parallel, to a degree that many workloads are not.
This would still be fine, if not for one major downside: it's only one workload. SPEC2017 contains a Blender run, which is conceptually very similar to Cinebench, but it is not just a Blender run. Unless the work you do is actually offline, CPU-based rendering, which for the M1 it probably isn't, Cinebench is not a great general-purpose benchmark.
(Note that at the time of the Twitter argument, we only had Cinebench results for the A12X.)
On the Good and Bad of GFXBench

GFXBench, as far as I can tell, makes very little sense as a benchmark nowadays. As I said for Geekbench's GPU compute benchmarks, these sorts of tests are hugely dependent on APIs and OSs, to a degree much larger than for CPUs. Again, none of the GPU benchmarks strike me as particularly good, and most of the tests look... not great. This is bad for a benchmark, because these tests are trying to represent the performance you will see in games, which are clearly optimized to a different degree.
This is doubly true when Apple GPUs use a significantly different GPU architecture, Tile Based Deferred Rendering, which must be optimized for separately. EDIT: It has been pointed out that as a mobile-first benchmark, GFXBench is already properly optimized for tiled architectures.
On the Good and Bad of browser benchmarks

If you look at older phone reviews, you can see runs of the A13 with browser benchmarks.
Browser benchmark performance is hugely dependent on the browser, and to an extent even the OS. Browser benchmarks in general suck pretty bad, in that they don't capture the main slowness of browser activity. The only thing you can realistically conclude from these browser benchmarks is that browser performance on the M1, when using Safari, will probably be fine. They tell you very little about whether the chip itself is good.
On the Good and Bad of random application benchmarks

The Affinity Photo beta comes with a new benchmark, which the M1 does exceptionally well in. We also have a particularly cryptic comment from Blackmagic Design, about DaVinci Resolve, that the “combination of M1, Metal processing and DaVinci Resolve 17.1 offers up to 5 times better performance”.
Generally speaking, you should be very wary of these sorts of benchmarks. To an extent, these benchmarks are built for the M1, and the generalizability is almost impossible to verify. There's almost no guarantee that Affinity Photo is testing more than a small microbenchmark.
This is the same for, eg., Intel's ‘real-world’ application benchmarks. Although it is correct that people care a lot about the responsiveness of Microsoft Word and such, a benchmark that runs a specific subroutine in Word (such as conversion to PDF) can easily be cherry-picked, and is not actually a relevant measure of the slowness felt when using Word!
This is a case of what are seemingly ‘real world’ benchmarks being much less reliable than synthetic ones!
On the Good and Bad of first-party benchmarks

Of course, then there are Apple's first-party benchmarks. These include real applications (Final Cut Pro, Adobe Lightroom, Pixelmator Pro and Logic Pro) and various undisclosed benchmark suites (‘select industry-standard benchmarks, commercial applications, and open source applications’).
I also measured Baldur's Gate 3 running at ~23-24 FPS at 1080p Ultra in an Apple tech talk, at the segment starting 7:05: https://developer.apple.com/videos/play/tech-talks/10859
Generally speaking, companies don't just lie in benchmarks. I remember a similar response to NVIDIA's 30 series benchmarks. It turned out they didn't lie. They did, however, cherry-pick, specifically including benchmarks that most favoured the new cards. That's very likely the same here. Apple's numbers are very likely true and real, and what I measured from Baldur's Gate 3 will be too, but that's not to say other, relevant things won't be worse.
Again, recall what makes a benchmark good: correlating reliably with actual or perceived performance. A cherry-picked benchmark might be both real-world and honest, but it still isn't a good benchmark.
On the Good and Bad of the Hardware Unboxed benchmark suite

This isn't about Hardware Unboxed per se, but it did arise from a disagreement I had, so I don't feel it's unfair to illustrate with the issues in Hardware Unboxed's benchmarking. Consider their 3600 review.
Here are the benchmarks they gave for the 3600, excluding the gaming benchmarks which I take no issue with.
- Cinebench (MT+ST)
- V-Ray Benchmark (MT)
- Corona 1.3 Benchmark (MT)
- Blender Open Data (MT)
- WinRAR (MT)
- 7Zip File Manager (MT)
- Adobe Premiere Pro video encode (MT)
To have a lineup like this and then complain about the synthetic benchmarks for the M1 and the A14 betrays a total misunderstanding of what benchmarking is. There are a total of three real workloads here, one of which is single-threaded. Further, that one single-threaded workload is one you'll never realistically run single-threaded. As discussed, offline CPU rendering is an atypical and hard-to-generalize workload. Compression and decompression are also very specific sorts of benchmarks, though more readily generalizable. Video encoding is nice, but this still makes for very thin pickings.
Thus, this lineup does not characterize any realistic single-threaded workloads, nor does it characterize multi-core workloads that aren't massively parallel.
Contrast this to SPEC2017, which is a ‘synthetic benchmark’ of the sort Hardware Unboxed was criticizing. SPEC2017 contains a rendering benchmark (526.blender) and a compression benchmark (557.xz), and a video encode benchmark (525.x264), but it also contains a suite of other benchmarks, chosen specifically so that all the benchmarks measure different aspects of the architecture. It includes workloads like Perl, GCC, workloads that stress different aspects of memory, plus extremely branchy searches (eg. a chess engine), image manipulation routines, etc. Geekbench is worse, but as mentioned before, it still correlates with SPEC2017, by virtue of being a general benchmark that captures most aspects of the microarchitecture.
So then, when SPEC2017 contains your workloads, but also more, and with more balance, how can one realistically dismiss it so easily? And if Geekbench correlates with SPEC2017, then how can you dismiss that, at least given disaggregated metrics?
In conclusion

The bias against ‘synthetic benchmarks’ is understandable, but misplaced. Any benchmark is synthetic, by nature of abstracting speed to a number, and any benchmark is real world, by being a workload you might actually run. What really matters is knowing how well each workload represents your use-case (I care a lot more about compilation, for example), and knowing the issues with each benchmark (eg. Geekbench's bad score aggregation).
Skepticism is healthy, but skepticism is not about rejecting evidence; it is about finding out the truth. The goal is not to have the benchmarks which get labelled the most Real World™, but to genuinely understand the performance characteristics of these devices, especially if you're a CPU reviewer. If you're a reviewer who dismisses Geekbench, but you haven't read the Geekbench PDF characterizing the workloads, or your explanation stops at ‘it's short’ or ‘it's synthetic’, you can do better. The topics I've discussed here are things I would consider foundational if you want to characterize a CPU's performance. Stretch goals would be to actually read the literature on SPEC, for example, or to do performance-counter-aided analysis of the benchmarks you run.
Normally I do a reread before publishing something like this to clean it up, but I can't be bothered right now, so I hope this is good enough. If I've made glaring mistakes (I might've, I haven't done a second pass), please do point them out.
Phone Buying Guide for Pokemon Go (2020)
tl;dr: The best value mid-range phones in 2020 are as follows. Retail prices noted in local currency.
- OnePlus Nord (Best value Android worldwide; £379, U.K.)
- Google Pixel 4a (Best value Android in U.S.; $349, U.S.)
- iPhone SE (Best value iOS; $399, U.S.)
Introduction

With the 2020 holiday shopping season about to head into its peak and the big Go Beyond update coming to Pokemon Go, I thought it might be a good idea to share what I've learned from phone shopping this year. Additionally, the latest 0.193 update officially ends support for iOS 11 and Android 5, so there may be people looking for replacement phones.
This post was inspired by a couple of great submissions made in 2017 and 2018. I am not the original author of those posts, nor am I particularly well-versed on the latest tech, but I'll try my best here. Also, full disclosure: I have not played around with any of these devices, and everything discussed here comes from viewing phone specs, "professional" review articles/videos, and my own personal interpretations.
Here's how the post is organized:
- tl;dr (too long; didn't read) - up at the top
- Introduction - you're here
- Considerations for Playing Pokemon Go - I describe the main metrics I'll use to compare phones and give an example based on personal experience
- Phone Makers & Highlighted Models - The bulk of the post, where I talk about phone brands and point out notable models, usually a high-end flagship and a mid-tier option.
- Conclusions - closing thoughts
- Glossary - A few phone tech terms that might be helpful to readers
Considerations for Playing Pokemon Go

Battery: For many players, this is probably the most important factor when playing the game. A good battery means you can get through a full Community Day without needing to bring a power bank with you. Capacity is given in units of milliamp-hours (mAh), and the average value for a modern phone is about 3000-4000 mAh. That mAh number isn't everything, though, as screen specs and power draw from other hardware components can affect how long a full charge lasts.
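As a very rough rule of thumb, screen-on time scales with capacity divided by average current draw. Here's a back-of-the-envelope sketch; the ~800 mA draw figure is my guess for GPS-plus-screen-on gaming, not a measurement, and real draw varies a lot between phones.

```python
def hours_of_play(capacity_mah, avg_draw_ma=800):
    """Crude battery-life estimate: mAh capacity / mA average draw.
    Ignores voltage, battery wear, thermals, and OS efficiency."""
    return capacity_mah / avg_draw_ma

for capacity in (3000, 4000, 6000):
    print(f"{capacity} mAh -> about {hours_of_play(capacity):.1f} h of play")
```

The estimate is only as good as the draw guess, and draw depends heavily on the screen, SoC, and signal conditions, which is exactly why the mAh number "isn't everything".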
RAM: A phone's random access memory (RAM) is the amount of local memory that a phone can quickly work with. This dictates how many active apps you can have open at once (i.e. whether you need to reload an app when switching between Pokemon Go and Discord/Telegram/WhatsApp/Messenger/etc.). I highly recommend a minimum of 4GB of RAM for an Android phone, as the operating system (OS) can sometimes take up to 1.5 GB on its own.
Processor: All phones use a system on a chip (SoC) that roughly determines how well the phone runs apps/processes/tasks. Most Android phones use Qualcomm Snapdragon SoCs, which are separated by overall performance tier (600 and lower for budget, 700 for mid-range, 800 for flagship), generation, and incremental upgrades. For playing Pokemon Go, you should really go for a Snapdragon 600 series at the very least.
Non-considerations: This post is about picking a phone to play Pokemon Go. Thus, I've left out regularly discussing phone aspects that don't matter as much for gameplay such as cameras, charging speeds, 5G compatibility, and extra special features, though I might have one-off mentions if they help define a model. I occasionally mention display refresh rates, but it's not comprehensive. If any of these features are important for you when picking a phone, be sure to do your own research before buying!
An Example: I've used a 2017 Motorola Z2 Play with 3000 mAh battery, 3GB RAM, and a Snapdragon 626 for the past 2.5 years playing heavily (current Trainer stats: 136 million XP, 163,000 Pokemon caught, no Go+). Over roughly the same time, my girlfriend has used a 2017 Google Pixel 2 with 2700 mAh battery, 4GB RAM, and a Snapdragon 835 chip to play. Before we got these phones, we had used an iPhone 5s and 5, respectively, to play since launch.
During pre-pandemic 3 hour Community Days, I could comfortably play the whole time on a single full phone charge while my girlfriend would need to plug in around the last hour; the increased battery capacity and lower-end chip meant that my Z2 Play would last longer than her Pixel 2. However, my Z2 Play (purchased for about $380) is still waiting around for a promised Android 9 update but now also has MAJOR RAM issues; so much so that Pokemon Go frequently crashes presumably due to the system running out of RAM. Meanwhile, her Pixel 2 (purchased for about $800) is still going strong on Android 10 and can easily last another year or two even with the last device software update coming in December 2020.
Feel free to keep this example and the previous considerations in mind as we go through each major phone brand alphabetically below.
Apple

Performance-wise, iOS offers probably the smoothest Pokemon Go experience. Thanks to Apple's vertical integration, all parts of an iPhone (OS, software, hardware) are optimized for one another, allowing iPhones to have great performance even when their raw numbers don't look so impressive. Thus, it's not always useful to compare iPhone spec numbers to those of Android phones. Apple also offers the longest software support period of any phone maker (around 5 years!), so you'll continue to get OS and security updates for quite a while.
One important note is that when you buy an Apple device, you're buying into the Apple ecosystem. Apple keeps you on its devices by making interfacing with other Apple products easy, and interfacing with everything else sometimes near impossible. Keep that in mind before you jump in!
This year's highlights include:
- iPhone 12 Pro Max ($1099) - This year's top of the line iPhone model. It has a larger battery (3687 mAh), bigger screen, and better camera features than any of the other iPhone 12 models. Though it only has 6GB of RAM, Apple is able to make that go a long way. Go for this if you want the latest and greatest iPhone device, but check out the other iPhone 12 models if you want to shave off a few hundred dollars for pretty much the same experience (mostly losing camera features).
- iPhone SE (2020) ($399) - If you want that smooth iOS Pokemon Go experience but don't want to break the bank, this is the choice for you. It's the cheapest iPhone Apple's ever released, yet has the same chip as last year's iPhone 11. The SE has a seemingly tiny 3GB of RAM, but again, Apple is able to stretch that out. The biggest downside, however, is that its battery capacity is low (1821 mAh - the smallest of all phones in this post), so you'll definitely have to bring a portable charger to make it through a full Community Day. Some reviewers go as far as to recommend the iPhone 11 as the "cheap iPhone option" instead because of the anemic battery on the SE.
**Asus**

Here's a model that really stood out:
- ROG Phone 3 ($999) - A phone that's specifically designed for gaming. Like, competitive first-person shooter gaming. With a 6000 mAh battery (the biggest of all the phones in this post), minimum 12GB RAM, Snapdragon 865+, and a flurry of other features, this phone is overkill if your primary purpose is playing Pokemon Go. In the U.S., it appears to be mostly compatible with AT&T and T-Mobile networks, but not Verizon.
**Google**

The best feature you get for the money is the camera. Pixel phones have a legendary camera for their price, which is nice, but not super relevant for Pokemon Go. Overall, however, they're solid products that run apps very well.
This year's highlights include:
- Pixel 5 ($699) - Very respectable specs (4000 mAh battery, 8GB RAM, Snapdragon 765G) for the price of an upper mid-range phone. The specs aren't spectacular, however; the chipset in particular isn't the higher-end Snapdragon 865. But most users probably won't notice the loss in sheer, raw power.
- Pixel 4a ($349) - One of the two phones topping the lists of best budget phone of 2020, the Pixel 4a is even cheaper than its main competitors. Its internals are very clearly a step down (3140 mAh, 6GB RAM, Snapdragon 730G) from the Pixel 5, but should still be more than enough for any Trainer, especially at this price point. Oh, and it has a headphone jack!
**HTC**

Today, HTC does still make phones, but they're typically hard to find. None are officially sold in the U.S., so you'd have to go through third-party sellers. The phones they do have are usually mid-range, but I haven't seen any favorable tech reviews, so I'll just move on.
**Lenovo**

Here's one that's worth pointing out:
- Legion Duel ($1049) - A very similar gaming phone to the Asus ROG Phone 3. The biggest trade-off is a smaller battery capacity (5000 mAh), though it charges faster. Definitely overkill for playing Pokemon Go, but if you have other mobile gaming aspirations and want a fast charging phone, then consider this one.
**LG**

- V60 ThinQ/Dual Screen ($799 to $949) - Do you want (the option of using) two full-sized screens? If so, then this might be the phone for you. With a 5000 mAh battery, 8 GB RAM, and a Snapdragon 865 chip, it's got the specs to power everything you want to do on those dual screens. The price that you'll have to pay for this phone varies based on your network carrier in the U.S., so don't get too attached until you do your homework.
**Motorola**

A relatively inconsequential but beloved feature of Motorola phones is Moto Actions. If you've ever used a Motorola phone, you'll know: chop your phone in midair to activate the flashlight, twist it along its long axis to open the camera, etc. Small details, but a lot of fun to play with.
Here are a couple of options from this maker:
- Moto Edge+ ($999) - Motorola's first flagship phone in a few years, and it's a Verizon exclusive. A 5000 mAh battery, 12GB RAM, Snapdragon 865, and a 90 Hz refresh rate make for an impressive device. Some downsides, though: lots of bloatware, a weak vibration motor, a bad fingerprint reader, and a screen that curves over the edges.
- Moto G Power ($249) - If you're in the U.S. and need something cheaper than a mid-range phone, consider the Moto G Power. With a 5000 mAh battery, 4GB RAM, and a Snapdragon 665, this should suffice for your PoGo needs. Its claim to fame is definitely that battery, as the RAM and SoC aren't too impressive. It's compatible with most major U.S. carriers and you should be able to grab it at some discount. Also, I wouldn't recommend any Android phone with lower RAM or chip specs than this if you want to play Pokemon Go on it for more than a year.
**OnePlus**

There are some notable potential obstacles to buying a OnePlus phone in the U.S., however. Some highly praised models simply aren't released in the U.S., while others are supported on only some networks (mostly T-Mobile, OnePlus's official U.S. partner). If you do manage to make it work, though, you'll soon find out exactly why tech reviewers absolutely love OnePlus.
This year's highlights include:
- OnePlus 8T ($749) - If you want all of the flagship features at a price that just barely puts it into the high-end category, the OnePlus 8T is the way to go. With 4500 mAh battery, 12GB RAM, Snapdragon 865, 120 Hz refresh rate, and warp charging, it's got great specs at an unbelievable price (to demonstrate, sneak a peek at the Samsung Note 20 Ultra!).
- OnePlus Nord (£379) - This is the phone that I wish I could have bought. It's the other phone (besides the Google Pixel 4a) that tops the lists of best budget phones of 2020. 4115 mAh battery, 8GB RAM, Snapdragon 765G. It has the specs of a Google Pixel 5 for hundreds less (£379 ≈ $505 USD vs. the Pixel 5's $699 USD). You also get a 90 Hz refresh rate for smoother animations. The downside? It's not sold in the U.S., and it's missing support for a few frequency bands that U.S. carriers use for their networks, so it probably won't work super well even if you do import one (especially on Verizon). There are pared-down Nord variants coming to the U.S. in the near future, but it's just not the same.
- OnePlus 7T T-Mobile version ($349) - If you really wanted the Nord, are stuck in the U.S., but are a T-Mobile customer, you might be in luck! With 3800 mAh battery, 8GB RAM, and a Snapdragon 855+ (compared to the 765G on the Nord, the 855+ is last year's model, but for a higher class of phones), you get roughly the same specs as the Nord for an even lower price! But again, it's only for T-Mobile customers.
**Samsung**

A strange quirk is that the internal SoC in some models differs based on which region you buy from. Samsung makes its own Exynos chips for less, but their performance sometimes lags behind that of the more common Qualcomm Snapdragon.
This year's highlights include:
- Samsung Galaxy Note 20 Ultra ($1299) - Considered the best of the best, this is the phone that has all of the top features put together. 4500 mAh battery, 12GB RAM, Snapdragon 865+ (Americas, East Asia), 120 Hz refresh rate, and fast wired and wireless charging. This is the phone with that fancy pen stylus, whose latency has apparently been vastly improved. The battery life is supposedly boosted by the phone's adaptive refresh rate, which drops when apps don't need higher rates. But... just look at that price! I cri.
- Samsung Galaxy S20 FE ($699) - Priced as an upper mid-range phone, this one delivers one of the best values at this price point. A 4500 mAh battery, 6GB RAM, and a Snapdragon 865 chip make this a very competitive phone. If you're able to get it on sale, that's even better! The main issue, however, is that even when discounted, it's still a bit expensive for people on a tight phone budget. But at least it's more widely available than similarly spec'd Xiaomi phones.
- Samsung Galaxy A51 ($399) - This is Samsung's most relevant competitor to the best value mid-range phones. It has a 4000 mAh battery, 4GB RAM, and an Exynos 9611 chip, which is comparable in specs, though you may want to consider getting more RAM. Overall, it's a good phone, but it doesn't outshine its mid-range competitors like the Pixel 4a or Nord by offering a better price or better specs.
**Sony**

Here's probably Sony's best model this year:
- Xperia 5 II ($949) - Sony's Xperia series has a few tricks up its sleeve. On paper, the phone looks alright: 4000 mAh battery, 8GB RAM, Snapdragon 865 chip, 120 Hz refresh rate. But the first thing people will notice is its unusual aspect ratio - it's much, much taller than it is wide. This makes it a more comfortable one-handed device and gives a more cinematic aspect ratio in landscape mode. You also get manual camera controls imported directly from Sony's professional camera lineup. Oh, and you get a 3.5 mm headphone jack, which is nearly impossible to find on phones at this price point.
**Xiaomi**

Nevertheless, here are a few models that really stand out:
- Xiaomi Mi 10 ($649) - On paper, you get a lot more than you'd expect from this phone based on its price. A 4780 mAh battery, 8GB RAM, and a Snapdragon 865 give it flagship-level specs at an upper mid-range price. However, you might have trouble finding a place to buy it, and it might not work with certain networks (definitely not on Verizon in the U.S.). You may also want to consider the Xiaomi Poco F2, which only has 6GB RAM, but can be found for even cheaper in certain markets.
- Black Shark 3 Pro ($899) - Xiaomi's gaming phone entry that has a very unique look. It has physical shoulder buttons that pop up when gaming in landscape orientation and liquid cooling(!). 5000 mAh battery, Snapdragon 865, 12GB of RAM, and a 90 Hz refresh rate all give it top-notch specs. But again, good luck getting it to work if you're in the U.S.
- Poco X3 (£199) - On paper, this phone's stats do not match its asking price. 5160 mAh battery, 6GB RAM, Snapdragon 732G, and a 120 Hz refresh rate, all starting at £199 GBP/€229 EUR (about $250 USD). It's the best ultra-affordable phone of 2020, but has some caveats. It's not fully compatible with U.S. networks and has ads built right into the OS (though I hear there is a way to disable this). If you can deal with these, then check this bad boy out.
**Conclusions**

There's a myriad of options when buying a phone these days. Here, I've gone over some notable current examples from many of the major phone brands globally, but this is by no means a comprehensive treatment. In fact, this post is heavily biased towards the U.S. (this is Reddit, after all).
In my opinion, mid-range phones offer the best value for your money when buying a brand new phone. Budget options (under $200) really aren't suited for playing Pokemon Go for an appreciable amount of time, and you'll get more frequent slowdowns and crashes with each new game update. On the flip side, the specs on flagship phones are overkill for a game like PoGo and the huge price tags are daunting. You could always buy a flagship from a year or two ago for a deep discount (in fact, this is what many people recommend instead of buying a new, mid-range phone model), but you'll get that many years fewer in guaranteed OS and security updates.
Doing the research for this post has given me some interesting insight into phones, brands, and networks. There are often some amazing choices out there that you've never heard of simply because they're not marketed in your country. Also, Verizon may be America's #1 network, but it absolutely sucks if you like a phone that isn't made by an American, European, or South Korean company.
Lastly, to revisit my earlier example, I ended up purchasing a Google Pixel 4a for myself a few days ago to replace my Moto Z2 Play. I'm really looking forward to it and absolutely cannot wait to go catching and grind towards level 50 on my new device!
I hope this post has been helpful!
**Glossary**

- Budget/Mid-range/Flagship - These are the main smartphone categories that describe price and performance. Roughly speaking, a budget phone is anything under $300 USD, a mid-range is between $300 and $700, and a flagship (or high-end) is anything over $700. Flagship phones are the devices that makers cram all of the best features into.
- Display Refresh Rate - The rate at which the screen updates - think frames-per-second. A higher number means a seemingly smoother animation. Most people won't notice a difference, but tech enthusiasts love higher rates. 60 Hz is typical, 90 Hz is the next step up, and some go as high as 120 Hz or 144 Hz.
- SoC (System on a chip) - The primary processor that ties many of the components (CPU, graphics, GPS) of a smartphone together. Most Android phones use a Snapdragon SoC, while a few makers like Samsung, Huawei, and Apple create their own. There are different tiers and generations, and the SoC is often the primary indicator of a phone's performance.
- RAM (random access memory) - The local memory that a phone can quickly work with. This is what dictates how many active apps you can have open at once (i.e. what determines whether you need to reload apps when switching between Pokemon Go and Discord/Telegram/WhatsApp/Messenger, etc.).
- milliamp-hour (mAh) - The typical unit for battery capacity. Literally, it represents how much current the battery can supply for one hour (e.g. a 3000 mAh battery can deliver 3000 mA for one hour, or 1000 mA for three hours), and technically simplifies to units of charge (1 A = 1 coulomb/second, so 1 mAh = 3.6 coulombs).
- Trade-in - A business practice where you basically sell a retailer your old phone in order to get a discount on their product. Your trade-in doesn't need to be from the same maker; they'll take it regardless. The amount of cash you'll get in return is directly tied to the model and condition of your old phone; most old phones won't award any money, but it's more environmentally friendly than throwing them in the trash.
- 5G - The new, 5th-generation standard in cellular data networking. It's an upgrade to 4G LTE, promising faster speeds, but the infrastructure isn't quite there yet. Many phone brands are coming out with 5G-compatible phones, but it's not really worth it yet, as the fastest 5G signals are incredibly short-range and there aren't enough transmitters even in major cities.
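Two of the numeric terms above (refresh rate and battery capacity) translate into everyday figures with simple arithmetic. Here's a minimal sketch; the 1000 mA average current draw below is a made-up number purely for illustration, since real draw varies wildly with screen brightness, GPS use, temperature, and battery age:

```python
def frame_interval_ms(refresh_hz: float) -> float:
    """Milliseconds between screen updates at a given refresh rate."""
    return 1000.0 / refresh_hz

def estimated_hours(capacity_mah: float, avg_draw_ma: float) -> float:
    """Rough battery-life estimate: capacity (mAh) divided by average draw (mA).

    This is a back-of-the-envelope figure, not a real-world measurement.
    """
    return capacity_mah / avg_draw_ma

# 60 Hz redraws every ~16.7 ms; 120 Hz halves that to ~8.3 ms.
for hz in (60, 90, 120):
    print(f"{hz} Hz -> {frame_interval_ms(hz):.1f} ms per frame")

# Hypothetical: a 3000 mAh battery under a 1000 mA average gaming load
# lasts about 3 hours - roughly one pre-pandemic Community Day.
print(estimated_hours(3000, 1000))
```

By this crude measure, you can see why the iPhone SE's 1821 mAh battery struggles through a full Community Day while a 5000 mAh phone doesn't break a sweat.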
P.S. For some reason, I'm not able to find/select the [Gear] flair for this post. Halp.