

Some charging pads also prop up the phone at an angle, making it easy to read the screen while also not having to hold the phone up. Most phones have their charging port on the bottom, so a phone stand couldn’t be used while charging with a cord.
This 100%. The other comments addressed the “should I withdraw?” aspect of OP’s question, but this comment deals with “should I stop contributing?”. The answer to the latter is: no.
The mantra in investing has always been “buy low, sell high”. If the stock market is down, continuing your 401k contributions is doing the “buy low” part.
I can understand the pessimism in some of the answers given so far, especially with regards to the poor state of American public transit. But ending a discussion with “they guess” is unsatisfactory to me, and doesn’t get to the meat of the question, which I understand to be: what processes might be used to identify candidate bus stop locations.
And while it does often look like stops are placed by throwing darts at a map, there’s at least some order and method to it. So that’s what I’ll try to describe, at least from the perspective of a random citizen in California who has attended open houses for my town’s recently-revamped bus network.
In a lot of ways, planning bus networks is akin to most engineering problems, in that there’s almost never a “clean slate” to start with. It’s not like Cities Skylines, where the town/city is built out by a single person, and even master planned developments can’t predict what human traffic patterns will be in two or three decades. Instead, planning is done with regard to what infrastructure already exists, where people already go, and what needs aren’t presently being met by transit.
Those are the big-picture factors, so we’ll start with existing infrastructure. Infra is expensive and hard to retrofit. We’re talking about the vehicle fleet, dedicated bus lanes, bus bulbs or curb extensions, overhead wires for trolleybuses, bus shelters, full-on BRT stops, and even the sidewalk leading up to a bus stop. If all these things need to be built out for a bus network, then that gets expensive. Instead, municipalities with some modicum of foresight will attach provisos to adjacent developments so that these things can be built at the same time in anticipation, or at least reserve the land or right-of-way for future construction. For this reason, many suburbs in the western USA will have a bulb-out for a bus to stop, even if there are no buses yet.
A bus network will try to utilize these pieces of infrastructure when they make sense. Sometimes they don’t make total sense, but the alternative of building it right-sized could be an outlandish expense. For example, many towns have a central bus depot in the middle of downtown. But if suburban sprawl means that the “center of population” has moved to somewhere else, then perhaps a second bus depot elsewhere is warranted to make bus-to-bus connections. But two depots cost more to operate than one, and that money could be used to run more frequent buses instead, if they already have those vehicles and drivers. Tradeoffs, tradeoffs.
Also to consider is that buses tend to run on existing streets and roads. That alone will constrain which way the bus routes can operate, especially if there are one-way streets involved. In this case, circular loops can make sense, although patrons would need to know that they’ll depart at one stop and return at another. Sometimes bus-only routes and bridges are built, ideally crossing orthogonally to the street grid to gain an edge over automobile traffic. In the worst case, buses get caught up in the same traffic as all the other automobiles, which sadly is the norm in America.
I can only briefly speak to inter-stop spacing, but it’s broadly a function of the desired service frequency, end-to-end speed, and how distributed the riders are. A commuter bus from a suburb into the core city might have lots of stops in the suburb and in the city, but zero stops in between, since the goal is to pick people up around the suburb and take them somewhere into town. For a local bus in town, the goal is to be faster than walking, so with 15 minute frequencies, stops have to be no closer than 400-800 meters or so, or else people will just walk. But space them too far apart and it’s a challenge for wheelchair users who need the bus. A bus service meant purely to connect two bus depots, on the other hand, might add a few stops in between where they make sense, like a mall, but perhaps none at all if it can travel exclusively on a freeway or in dedicated bus lanes. So many things to consider.
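To make the “faster than walking” arithmetic concrete, here’s a back-of-the-envelope sketch. Every constant in it is an assumption I picked for illustration, not data from any real agency:

```python
# Back-of-the-envelope: walk the whole way, or walk to a stop, wait,
# and ride? Every constant here is an illustrative assumption.
WALK_SPEED = 1.4        # m/s, a typical walking pace
BUS_SPEED = 6.0         # m/s effective (~22 km/h including stops), assumed
HEADWAY = 15 * 60       # s, one bus every 15 minutes
AVG_WAIT = HEADWAY / 2  # s, expected wait for a random arrival
ACCESS_WALK = 300       # m, assumed walk to and from the stops, combined

def walk_time(trip_m: float) -> float:
    return trip_m / WALK_SPEED

def bus_time(trip_m: float) -> float:
    return ACCESS_WALK / WALK_SPEED + AVG_WAIT + trip_m / BUS_SPEED

for trip in (500, 1000, 2000, 5000):
    print(f"{trip:5} m trip: walk {walk_time(trip)/60:5.1f} min, "
          f"bus {bus_time(trip)/60:5.1f} min")
```

With these numbers the bus only starts winning somewhere between the 1 km and 2 km trips, which is why stops packed closer than the access walk quickly stop earning their keep.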
As for existing human traffic patterns, the big innovation of the past decade or so has been to look at anonymized phone location data. Now, I’m glossing over the privacy concerns of using people’s coarse location data, but the large mobile carriers in the USA have always had this info, and this is a scenario where surveying people about which places they commute or travel to is imprecise, so using data collected in the background is fairly reliable. What this should hopefully show is where the “traffic centers” are (eg malls, regional parks, major employers, transit stations), how people are currently getting there (identifying travel mode based on speed, route, and time of day), and the intensity of such travel in relation to everyone else (eg morning/evening rush hour, game days).
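As a toy illustration of that mode-identification step, here’s a sketch that guesses travel mode from average speed alone. The thresholds are my own assumptions, and a real analysis would also lean on the route shape and time of day mentioned above:

```python
# Toy mode guess from average trip speed alone; real analyses also use
# the route taken and time of day. All thresholds are my assumptions.
def guess_mode(avg_speed_kmh: float) -> str:
    if avg_speed_kmh < 7:
        return "walking"
    if avg_speed_kmh < 25:
        return "cycling"
    if avg_speed_kmh < 60:
        return "car or bus (check: does the route hug a bus line?)"
    return "highway driving or rail"

for v in (4, 15, 40, 90):
    print(f"{v} km/h -> {guess_mode(v)}")
```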
I mentioned surveys earlier; while they’re imprecise for cataloguing all the places people go, they’re quite helpful for identifying the existing hurdles that current riders face. This is the third factor: identifying unmet needs. As in, difficulties with paying the fare, transfers that are too tight, or confusing bus depot layouts. But asking existing riders will not yield a recipe for growing ridership with new riders, people who won’t even consider riding the existing service, if one exists at all. Then there’s the matter of planning for ridership in the future, as a form of induced demand: a housing development that is built adjacent to an active bus line is more likely to create habitual riders from day 1.
As an aside, here in California, transit operators are obliged to undergo regular analysis of how the service can be improved, using a procedure called Unmet Transit Needs. The reason for this procedure is that some state funds are earmarked for transit only, while others are marked for transit first and if no unmet needs exist, then those funds can be applied to general transport needs, often funding road maintenance.
This process is, IMO, horrifically abused to funnel more money towards road maintenance, because the bar for what constitutes an Unmet Transit Need includes a proviso that if the need is too financially burdensome to meet, they can just not do it. That’s about as wishy-washy as it gets, and that’s before we consider the other proviso that requires an unmet need to also satisfy an expectation of a certain minimum ridership… which is near impossible to predict in advance for a new bus route or service. As a result, transit operators – under pressure by road engineers to spend less – can basically select whichever outside consultant will give them the “this unmet transit need is unreasonable” stamp of disapproval that they want. /rant
But I digress. A sensible bus route moves lots of people from places they’re already at to places they want to go, ideally directly or maybe through a connection. The service needs to be reliable even if the road isn’t, quick when it can be, and priced correctly to keep the lights on but maybe reduced to spur new ridership. To then build out a network of interlinking bus routes is even harder, as the network effect means people have more choices on where to go, but this adds pressure on wayfinding and fare structures. And even more involved is interconnecting a bus network to a train/tram/LRT system or an adjacent town’s bus network.
When it’s done properly, bus routing is not at all trivial for planners, and that’s before citizens write in with their complaints and conservatives keep trying to cut funding.
> have bandwidth that is some % of carrier frequency,
In my limited ham radio experience, I’ve not seen any antennas or amplifiers which specify their bandwidth as a percentage of “carrier frequency”, and I think that term doesn’t make sense for antennas and (analog) amplifiers: the carrier is a property of the modulation, and an antenna doesn’t care about modulation, which is why “HDTV antennas” circa the 2000s in the USA were merely a marketing term.
The only antennas and amplifiers I’ve seen have given their bandwidth as fixed ranges, often accompanied by a plot of the varying gain/output across that range.
> going up in frequency makes bandwidth bigger
Yes, but also no. If a 200 kHz FM commercial radio station’s signal were shifted from its customary 88-108 MHz band up to the Terahertz range of the electromagnetic spectrum (where infrared and visible light are), the bandwidth would still remain 200 kHz. Indeed, this shifting is actually done, albeit for cable television, where those signals are modulated onto fibre optic cables.
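A quick way to convince yourself of this is to put the same baseband content on two very different carriers and measure the occupied spectrum. A sketch with numpy, where the carrier values are arbitrary:

```python
import numpy as np

# Put the same 5 kHz "program" on two very different carriers and
# measure how much spectrum each occupies. Carriers are arbitrary.
fs = 1_000_000                # sample rate, Hz
t = np.arange(fs) / fs        # one second of time
baseband = np.cos(2 * np.pi * 5_000 * t)

def occupied_width(signal):
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    strong = freqs[spec > spec.max() / 100]  # crude -40 dB cutoff
    return strong.max() - strong.min()

for carrier in (100_000, 400_000):
    am = (1 + 0.5 * baseband) * np.cos(2 * np.pi * carrier * t)
    print(f"carrier {carrier/1e3:3.0f} kHz -> "
          f"~{occupied_width(am)/1e3:.0f} kHz occupied")
```

Both carriers come out at the same ~10 kHz, set entirely by the baseband content.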
What is definitely true is that way up in the electromagnetic spectrum, there is simply more hertz to utilize. If we include all radio/microwave bands, that would be the approximate frequencies from 30 kHz to 300 GHz: so basically 300 GHz of bandwidth. But for C band fibre optic cable, the usable band is 1530-1565 nm, which translates to roughly 191-196 THz, over 4 THz of bandwidth. That’s more than thirteen times larger! So much room for activities!
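The wavelength-to-frequency arithmetic, for anyone who wants to check it:

```python
# f = c / wavelength, applied to the C band window quoted above.
C = 299_792_458  # m/s, speed of light

f_high = C / 1530e-9  # Hz, short-wavelength edge
f_low = C / 1565e-9   # Hz, long-wavelength edge
print(f"{f_low/1e12:.1f} to {f_high/1e12:.1f} THz, "
      f"{(f_high - f_low)/1e12:.1f} THz of bandwidth")
# -> roughly 191.6 to 195.9 THz, about 4.4 THz
```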
For less industrial use-cases, we can look to 60 GHz technology, which is used for so-called “Wireless HDMI” devices, because the 7 GHz bandwidth of the 60 GHz band enables huge data rates.
To actually compare the modulation of different technologies irrespective of their radio band, we often look to spectral efficiency, which is how much data (bits/sec) can be sent over a given bandwidth (in Hz). Higher bits/sec/Hz means more efficient use of the radio waves, up to the Shannon-Hartley theoretical limit.
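For reference, the Shannon-Hartley limit is C = B · log2(1 + S/N). A small sketch, where the 20 MHz channel and 25 dB SNR are just example numbers:

```python
import math

# Shannon-Hartley: C = B * log2(1 + S/N), the ceiling on error-free
# throughput for a bandwidth B (Hz) at a given signal-to-noise ratio.
def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Example numbers only: a 20 MHz channel at a comfortable 25 dB SNR.
cap = shannon_capacity(20e6, 25)
print(f"{cap/1e6:.0f} Mbit/s ceiling, {cap/20e6:.1f} bits/sec/Hz")
```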
> getting higher % of bandwidth requires more sophisticated, more expensive, heavier designs
Again, yes but also no. If a receiver need only receive a narrow band, then the most straightforward design is to shift the operating frequency down to something more manageable. This is the basis of superheterodyne FM radio receivers, from the era when a few MHz were considered to be very fast waves.
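The trick rests on one identity: multiplying two sinusoids produces their sum and difference frequencies. A toy demonstration, with all frequencies chosen arbitrarily:

```python
import numpy as np

# Mixing in one line: multiplying two sinusoids yields their sum and
# difference frequencies; a filter then keeps the low "intermediate
# frequency". All frequencies here are arbitrary examples.
fs = 10_000_000                         # sample rate, Hz
t = np.arange(100_000) / fs             # 10 ms of time
rf = np.cos(2 * np.pi * 1_000_000 * t)  # a station at 1.0 MHz
lo = np.cos(2 * np.pi * 900_000 * t)    # local oscillator at 0.9 MHz

mixed = rf * lo                         # contains 0.1 MHz and 1.9 MHz
spec = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peaks = freqs[spec > spec.max() / 2]
print(sorted(round(p / 1e3) for p in peaks), "kHz")  # -> [100, 1900]
```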
We can and do have examples of this design for higher microwave frequency operation, such as shifting broadcast satellite signals down to normal television bands, suitable for reusing conventional TV coax, which can only carry signals in the 0-2 GHz band at best.
The real challenge comes when a massive chunk of bandwidth is of interest; then careful analog design is required (well, maybe only for precision work). Software defined radio (SDR) is one realm that needs the analog firehose, since “tuning” into a specific band or transmission is done later in software. A cheap RTL-SDR can view a 2.4 MHz slice of bandwidth, which is suitable for plenty of things except broadcast TV, which needs 5-6 MHz.
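For the curious, grabbing that 2.4 MHz slice takes only a few lines with the pyrtlsdr library (assuming a dongle is attached; the 100 MHz tuning is just an example):

```python
from rtlsdr import RtlSdr  # pip install pyrtlsdr; needs a dongle attached

sdr = RtlSdr()
sdr.sample_rate = 2.4e6    # the full ~2.4 MHz slice mentioned above
sdr.center_freq = 100e6    # example tuning: middle of the FM broadcast band
sdr.gain = 'auto'

# Each complex sample covers the whole slice; "tuning" to one station
# within it happens later, in software.
samples = sdr.read_samples(256 * 1024)
sdr.close()
print(len(samples), "IQ samples spanning ~2.4 MHz of spectrum")
```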
> LoRa is much slower, caused by narrowed bandwidth but also because it’s more noise-resistant
I feel like this states the cause-and-effect in the wrong order. The designers of LoRa knew they wanted a narrow-band, low symbol-rate air interface in order to be long range, and thus were prepared to trade away throughput to achieve that objective. I won’t say that slowness is a “feature” of LoRa, but given the same objectives and the limitations that this universe imposes, no one has produced a competitor with a blisteringly fast data rate. So slowness is simply expected under these circumstances; it’s not a “bug” that can be fixed.
In the final edit of my original comment, I added this:
> Radio engineering, like all other disciplines of engineering, centers upon balancing competing requirements and limitations in elegant ways. Radio range is the product of intensely optimizing all factors for the desired objective.
> Also, what if things that require very little data transmission used something lower than 2.4Ghz for longer range? (1Ghz or something?)
No one seemed to touch upon this part, so I’ll chime in. The range and throughput of a transmission depend on a lot of factors, but the most prominent are, in no particular order: peak and average output power, modulation (the pattern of radio waves sent) and frequency, background noise, and bandwidth (in Hz; how much spectrum width the transmission will occupy).
If all else were equal, changing the frequency to a lower band wouldn’t impact range or throughput. But that’s hardly ever the case, since reducing the frequency imposes limitations to the usable modulations, which means trying to send the same payload either takes longer or uses more spectral bandwidth. Those two approaches have the side-effect that slower transmissions are more easily recovered from farther away, and using more bandwidth means partial interference from noise has a lesser impact, as well as lower risk of interception. So in practice, a lower frequency could improve range, but the other factors would have to take up the slack to keep the same throughput.
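The “slower is more easily recovered” effect falls out of standard noise-floor arithmetic: thermal noise is about -174 dBm per Hz, so a narrower (slower) channel admits less noise and the receiver can hear weaker signals. A sketch, where the noise figure and required SNR are assumed typical values:

```python
import math

# Thermal noise floor is about -174 dBm per Hz of bandwidth, so halving
# the channel width buys ~3 dB of sensitivity. NF and required SNR are
# assumed typical values, not from any specific radio.
NOISE_FIGURE_DB = 6    # receiver noise figure, assumed
REQUIRED_SNR_DB = 10   # SNR the demodulator needs, assumed

def sensitivity_dbm(bandwidth_hz: float) -> float:
    noise_floor = -174 + 10 * math.log10(bandwidth_hz)
    return noise_floor + NOISE_FIGURE_DB + REQUIRED_SNR_DB

for bw in (125e3, 1e6, 20e6):   # LoRa-ish, narrowband, Wi-Fi-ish
    print(f"{bw/1e3:8.0f} kHz channel -> hears down to "
          f"{sensitivity_dbm(bw):6.1f} dBm")
```

The 125 kHz channel comes out around 22 dB more sensitive than the 20 MHz one, which in free space is roughly a 12x range advantage at the same transmit power.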
Indeed, actual radio systems manipulate some or all of those factors when longer distance reception is the goal. Some systems are clever with their modulation, such as FT8 used by amateur radio operators, in order to use low-power transmitters in noisy radio bands. On the flip side, sometimes raw power can overcome all obstacles. Or maybe just send very infrequent, impeccably narrow messages, using an atomic clock for frequency accuracy.
To answer the question concretely though, there are LoRa devices which prefer to use the ISM band centered on 915 MHz in the Americas, as the objective is indeed long range (kilometres to tens of kilometres in practice, with line-of-sight records in the hundreds) and small payload (maybe <100 Bytes), and that means the comparatively wider (and noisier) 2.4 GHz band is unneeded and unwanted. But this is just one example, and LoRa has many implementations that change the base parameters, like how MeshCore and Meshtastic might use the same physical radios, but the former implements actual mesh routing while the latter floods to all nodes (a bad thing).
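Those base parameters trade off in a simple relation. Semtech’s published LoRa bit-rate formula shows how each step up in spreading factor roughly halves throughput in exchange for range:

```python
# Semtech's published LoRa bit-rate relation:
#   Rb = SF * (4 / (4 + CR)) * BW / 2**SF
# Each step up in spreading factor (SF) roughly halves throughput in
# exchange for processing gain, i.e. range. Common settings below.
BW = 125_000  # Hz, a typical LoRa channel width
CR = 1        # coding-rate index, meaning 4/5 forward error correction

for sf in range(7, 13):
    rb = sf * (4 / (4 + CR)) * BW / 2**sf
    print(f"SF{sf:2}: {rb:6.0f} bit/s")
```

That runs from about 5469 bit/s at SF7 down to 293 bit/s at SF12, which is the slowness-for-range trade in its purest form.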
But some systems like WiFi or GSM can be tuned for longer range while still using their customary frequencies, by turning those other aforementioned knobs. Custom networks could indeed be dedicated to only sending very small amounts of data, like for telemetry (see SCADA). That said, GSM does have a hard cap of 35 km, for reasons having to do with how it schedules multiple devices into time slots at once.
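The 35 km figure comes from GSM’s timing advance: a 6-bit field (0-63) that tells each phone how early to transmit so its burst lands in its assigned time slot, with each step worth one bit period of round-trip delay. The arithmetic:

```python
# GSM's timing advance is a 6-bit field (0-63): each step tells the
# phone to transmit one bit period earlier to compensate round-trip
# delay, so the maximum one-way distance is fixed by the field size.
C = 299_792_458             # m/s, speed of light
BIT_PERIOD = 48 / 13 / 1e6  # s, the GSM bit period, ~3.69 microseconds

max_range_m = 63 * BIT_PERIOD * C / 2   # halved: the delay is round-trip
print(f"{max_range_m/1000:.1f} km")     # -> about 34.9 km
```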
Radio engineering, like all other disciplines of engineering, centers upon balancing competing requirements and limitations in elegant ways. Radio range is the product of intensely optimizing all factors for the desired objective.
I habitually remove the automatic +1, so I won’t feel self-aggrandizing haha
It’s for this reason that I sometimes spell out the units as: 1000 GBytes/sec or 1000 Gbits/sec. In my book, Byte is always “big B” and bit is always “little b”, and then spelling it out makes it unambiguous in writing.
There are, and the process is truly arcane – the .us process dates to 1993, documented in RFC 1480 – but people have done it: https://web.archive.org/web/20160316224838/https://owen.sj.ca.us/~rk/howto/articles/usdomain/usdomain.html
I’m also old, but I understand people do watch portrait videos. Sometimes a lot of them, in a single sitting. There’s a popular social media app which exclusively has short-form portrait videos.