Seeking Alpha

tmyklebu

  • The Net Neutrality Monster Has A Close Relative [View article]
    dwdallam,
    The Internet comprises a bunch of independent networks. There are capacity limitations all over the place within and between these networks. Some pairs of them have decided to interconnect ("peer") for one reason or another. Whether payments are exchanged and how big those payments are depends on the networks involved; currently, it's a matter for the two parties to work out between themselves, considering the particular capacity constraints they're hitting and the nature of the new traffic.

    "Net neutrality" unfortunately has lots of different and conflicting definitions. The variants of "net neutrality" I'm more familiar with, aimed at barring ISPs from doing deep packet inspection and DNS hijacking and RST packet injection and so forth, are materially different from what you're advancing---they only ask that ISPs not impersonate Internet hosts and, broadly, treat all packets in their networks "the same." They don't impose restrictions on who ISPs can peer with or how.

    The variant of "net neutrality" you're pushing here, that in addition parties to different peering agreements should always pay "the same amount" (for the privilege of peering, or per gigabyte out, or per net gigabyte out, or whatever) is a completely different beast. It's viscerally appealing to think of Internet access as a commodity, but it isn't workable on the scale of services like Netflix.

    I'd have the same "censorship" concern as you if Comcast was specifically throttling Netflix traffic---to me, this would be a net neutrality issue. This contrasts with Cogent or Comcast simply not having a fat enough pipe to handle all of the Netflix traffic and thus causing packet loss---this is a capacity issue, not a net neutrality issue.

    (Comcast used to mess around with BitTorrent or something, preferentially dropping packets based on something in the IP payload. This was a net neutrality issue. But I'm not sure they're doing something similar here.)

    What happened here just seems to be Netflix and Comcast agreeing to peer. Netflix will now send traffic directly to Comcast instead of asking Cogent to send the traffic on to Comcast. They're probably paying a little bit more to do so, but they're working around whatever capacity issue was slowing Netflix-Comcast traffic down beforehand. This is totally normal; Netflix wasn't getting the service it wanted from Cogent, so now it's peering with Comcast so it can send traffic to Comcast directly. So, unless we learn that Comcast was actually doing something sinister, all the net neutrality moaning that accompanied this run-of-the-mill peering agreement seems completely misplaced.
    Feb 25 03:52 PM | 1 Like
  • The Net Neutrality Monster Has A Close Relative [View article]
    Yeah, I've got to tip my hat to you on this one. Nice call.

    Amid all the moaning about the "death of net neutrality" in the news, all I'm getting is that Netflix and Comcast entered into a peering agreement.
    Feb 25 12:44 PM
  • The Net Neutrality Monster Has A Close Relative [View article]
    There seem to be problems between Cogent and just about everyone. I'm told they're both considerably cheaper than everyone else and considerably more difficult to deal with than anyone else. Googling for "cogent telia depeering" and "cogent sprint depeering" and "cogent level3 depeering" tells me that this isn't the first major row they've had.

    I misunderstood your article to say "the sky is falling!" rather than "Cogent's links to several major retail ISPs look to be becoming increasingly saturated" or such. For that I apologise. But the major regional ISPs in the chart (Cox, Time Warner, Cablevision, etc.) don't suffer from the same problem; in fact, speeds seem to be *rising* for several of them. So I still think you're trying to find a conclusion that isn't supported here.
    Feb 12 06:26 PM | 1 Like
  • The Net Neutrality Monster Has A Close Relative [View article]
    When I visit the USA ISP Speed Index page you link to, there are a whole bunch of ISPs whose speed index is going up, too. I see that you have them disabled in your chart.

    It seems that the effect you're seeing here is an artifact of selecting only the data that supports the conclusion you're trying to reach.
    Feb 12 04:48 PM | 1 Like
  • AMD: A Preview Of The Kaveri Release [View article]
    (1) The slide talks about binary trees, not B-trees or B*-trees or any other B-tree variant.

    (2) The slide is not talking about insertion or traversal. The slide is talking about lookups. In binary trees.
    Jan 7 10:53 PM
  • AMD: A Preview Of The Kaveri Release [View article]
    The slides are preliminary. Perhaps someone goofed. Perhaps the benchmark isn't what it says it is. Perhaps, for example, there really is an unholy amount of loop overhead on their CPU benchmarks for some reason.

    B-trees still aren't the same thing as binary trees. B-trees are B-ary. B is usually chosen to be considerably larger than two. Binary trees are binary. The time taken by the search should be dominated by the nodes not in cache (even though there are fewer), not the nodes in cache (even though there are more).
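
    To make the distinction concrete, here's a minimal sketch of a lookup in each structure. The layouts and names are mine for illustration, not from AMD's slides or benchmark:

```python
import bisect

class BinaryNode:
    """A binary tree node: exactly two child slots per node."""
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def binary_search_tree(node, key):
    # One random (pointer-chasing) access per level, log2(n) levels deep.
    while node is not None:
        if key == node.key:
            return True
        node = node.left if key < node.key else node.right
    return False

class BTreeNode:
    """A B-tree node: up to B-1 sorted keys and B children per node."""
    def __init__(self, keys, children=None):
        self.keys, self.children = keys, children  # children is None in a leaf

def btree_search(node, key):
    # One random access per level, but only log_B(n) levels; the scan
    # within a node is sequential and cache-friendly.
    while node is not None:
        i = bisect.bisect_left(node.keys, key)
        if i < len(node.keys) and node.keys[i] == key:
            return True
        node = node.children[i] if node.children else None
    return False
```
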

    I really don't think you're reading or understanding my posts, so I'm going to leave this discussion be.
    Jan 7 10:54 AM | 1 Like
  • AMD: A Preview Of The Kaveri Release [View article]
    Huh? Yes, you're looking up random records a bunch of times. Hopefully different random records each time. The slide says it's a binary tree, not a B-tree. They aren't the same thing.

    Your argument that performance increases slightly because each processor has its own cache is...specious to say the very least. First, CPUs have the same amount of cache whether the binary tree you stored over in RAM has 1M, 5M, 10M, or 25M keys. Second, no 4-core processor I know of has even 250MB of cache, which is what it would take to keep half of the biggest tree in cache and keep the expected number of memory hits per query down to 1.
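
    The 250MB figure is simple arithmetic, assuming 20-byte nodes as in my earlier estimate:

```python
# Quick check of the cache claim above (20-byte nodes assumed).
keys = 25_000_000
node_bytes = 20
half_tree_mb = keys // 2 * node_bytes / 1e6
print(half_tree_mb)  # 250.0 MB -- far beyond any 4-core CPU's cache
```
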

    The only other timing difference we should see is a reduction in per-node-visit loop overhead. If that's what was going on, we would see the same trend, or a more exaggerated one, in the 1-core case too. (Or they're doing something very silly once per query, like bouncing a cache line around among cores.)
    Jan 7 04:26 AM | 1 Like
  • AMD: A Preview Of The Kaveri Release [View article]
    Most large databases are I/O-bound for the simple reason that most of the accesses they're doing have to hit disk, even saying nothing about durability guarantees.

    I don't think binary tree searches (one random access per node; log_2(n) nodes) are similar enough to B-tree searches (one random access and B sequential accesses per node over log_B(n) nodes---usually a considerable constant factor fewer nodes hit per search) for you to make the extrapolation you're trying to make. Walking through the B-tree does a better job of hiding the cache miss and you just do so many fewer steps through the tree. The slam-the-bus solution I speculated they were doing below probably won't work out so hot for B-trees.
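
    A toy latency model shows why the B-tree comes out ahead. Every constant here is assumed for illustration, nothing is measured:

```python
import math

# Toy cost model: a random (cache-missing) access costs ~100 ns, scanning
# one key sequentially inside a node costs ~1 ns. Fanout B=64 is a
# hypothetical but typical choice.
RANDOM_NS, SCAN_NS = 100, 1
n, B = 25_000_000, 64

binary_ns = math.log2(n) * RANDOM_NS                   # one miss per level
btree_ns = math.log(n, B) * (RANDOM_NS + B * SCAN_NS)  # miss + in-node scan

print(f"binary tree search: ~{binary_ns:.0f} ns")
print(f"B-tree search:      ~{btree_ns:.0f} ns")
```

    Even though the B-tree touches more bytes per node, the sequential scan is cheap and there are far fewer levels, so the total is several times smaller.
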
    Jan 7 04:08 AM | 1 Like
  • AMD: A Preview Of The Kaveri Release [View article]
    The binary tree slide looks bogus to me. There's no way performance should *increase* as you make the tree bigger in the 4-core test. It should get substantially, measurably worse, both for the 1-core and the 4-core cases. (No idea what they're doing on the APU; maybe it's massively parallel and bound by bus throughput rather than memory latency.)

    My fuzzy math: 1M nodes in a tree is about 20 levels of tree. 25M nodes is about 25 levels of tree. Each node is, say, 20 bytes (16 bytes of left and right pointers and 4 bytes of data). So you can fit about 17 levels of tree in a 4MB cache and the first 16 will spend the majority of their time in cache. The remaining 3-8 levels will almost always result in accesses to main memory which are about an order of magnitude more expensive. So the 25M case on the CPUs should be about twice as slow as the 1M case on the CPUs. But they show it getting faster as the tree size goes up.
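
    The same fuzzy math in code form, with the same assumed 20-byte nodes and 4MB cache:

```python
import math

# Back-of-envelope arithmetic for the estimate above (numbers assumed).
node_bytes = 20                             # 16B of pointers + 4B of data
cache_bytes = 4 * 2**20                     # 4 MB of cache

nodes_in_cache = cache_bytes // node_bytes  # ~210k nodes fit
levels_in_cache = math.floor(math.log2(nodes_in_cache))

uncached = {}
for keys in (1_000_000, 25_000_000):
    levels = math.ceil(math.log2(keys))
    uncached[keys] = levels - levels_in_cache  # levels that hit main memory

print(levels_in_cache, uncached)
```

    This reproduces the 3-8 uncached levels quoted above, which is why the bigger tree should be measurably slower, not faster.
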
    Jan 7 03:48 AM
  • Dataram Corporation: A $2.4 Stock Poised To Climb Higher [View article]
    You're being silly. Modern NAND flash has such high latency that using it in place of DRAM would ruin performance of anything that hits memory with any regularity. (And that's pretty much everything.)
    Dec 5 12:28 PM
  • Dataram Corporation: A $2.4 Stock Poised To Climb Higher [View article]
    Hold on. You're saying that Crossbar's memory product is going to render *DRAM* obsolete? Have you perchance confused DRAM with flash memory? Crossbar's own website points out that it's several times slower than most modern DRAM.
    Dec 4 09:43 PM | 1 Like
  • Intel Vindicated, Very Competitive With Apple's A7 [View article]
    Hang on. Why are we putting any stock into CPU benchmarks that haven't been written carefully? It seems like we might be comparing bad code on one platform with bad code on another platform here. Switching MSVC out for icc would mean we're comparing mediocre code on one platform with (presumably) bad code on another.
    Nov 19 12:04 PM
  • Why Brent Futures And Stock Index Futures Are Easy To Manipulate [View article]
    @American: It would be tough to do that without knowing a case where the postulated manipulation probably happened and which side it was on. But you could look at trades that happen at the opening cross and opening cross order imbalance data on the relevant Friday, for instance.
    Nov 9 09:59 AM
  • Why Brent Futures And Stock Index Futures Are Easy To Manipulate [View article]
    Yes, Paulo, but someone has the other side of that collusive group's trade at expiry. If the group is long the futures, the other side is short the futures. If the other side isn't trying to make a huge short bet, the other side, not the manipulators, will cover its position in the opening cross on expiration Friday, and that will materially affect the settlement price of the futures.

    I understand the difference between a cash-settled future and a physically-settled future. The arbitrage between a physically-settled future and the underlying is a simple long future, short underlying---you get to watch the short underlying position disappear upon delivery. The arbitrage between a cash-settled future and the underlying is different, however. If you are long the future, you want to be short the underlying AND you exit your position in the underlying when the settlement price of the future is determined, preferably at the settlement price.
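
    A toy numeric example (all prices made up) of why the settlement price drops out of the cash-settled arbitrage's P&L:

```python
# Long the future, short the underlying, close the short at the settlement
# price. Whatever the settlement prints, the two legs cancel and the
# arbitrageur keeps the entry-time basis.
future_entry, underlying_entry = 99.0, 100.0   # hypothetical entry prices

totals = []
for settlement in (80.0, 100.0, 120.0):
    future_pnl = settlement - future_entry     # cash-settled long future
    short_pnl = underlying_entry - settlement  # short closed at settlement
    totals.append(future_pnl + short_pnl)

print(totals)  # the settlement price cancels out of every total
```
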

    This is a massive difference, too, and it would seem to exactly cancel out the massive difference you're talking about. (Again, I'm ignoring the Brent futures here, since I have no idea how the EFP market for Brent crude works.)

    There is an "opening cross" on most(?) US equity markets, where a bunch of orders are collected and a single match is run to determine "the" opening price. I think they collect any limit orders that are still on the book at the open and any orders specifically marked "use me in the opening cross" and do the match; the opening price of each symbol is a market-clearing price for the bid and offer books compiled for the opening cross.

    Now, maybe the manipulators don't need to do anything at S&P future expiry. But the hypothetical arbitrageur on the other side needs to enter some really fat orders into the opening cross on expiration Friday so that he closes out his short position in the underlying at the settlement price of the future. When the cross happens, the really fat sell orders will probably drive down the opening prices of all involved symbols considerably more than the buying pressure over the previous minutes or hours could drive them up. It doesn't seem like this ends well for the manipulator.
    Nov 8 07:34 PM
  • Why Brent Futures And Stock Index Futures Are Easy To Manipulate [View article]
    @Paulo: I don't see how that aspect is unique to the S&P futures market. Arbitrageurs who are actually in an arbitrage position won't get creamed. But "arbitrageurs" who need to rely on a greater fool, rather than a structural feature of the products they arbitrage, to let them exit their positions without losing money sure will.

    I also don't think your article laid out a strategy where a deep-pocketed market manipulator can manipulate anything to do with the S&P and turn a profit. (I don't know anything about the settlement process or the EFP market for Brent crude, though. Maybe it's screwy. I'm not trying to comment on that.)

    @PSalerno: Sure. Arbitrageurs aren't forced to carry out the arbitrage. If they *couldn't* carry out the arbitrage, though, they wouldn't, y'know, be doing arbitrage. What keeps them from getting totally hosed is that they *can* carry out the arbitrage if needed. In this case, that means selling S&P components in the opening cross, not praying they can unload the futures on someone else, who's then probably in exactly the same position.
    Nov 8 06:22 PM
COMMENTS STATS
29 Comments
17 Likes