_______ __ _______
| | |.---.-..----.| |--..-----..----. | | |.-----..--.--.--..-----.
| || _ || __|| < | -__|| _| | || -__|| | | ||__ --|
|___|___||___._||____||__|__||_____||__| |__|____||_____||________||_____|
on Gopher (unofficial)
URI Visit Hacker News on the Web
COMMENT PAGE FOR:
URI How the U.S. National Science Foundation enabled Software-Defined Networking
coreyzzp wrote 5 hours 59 min ago:
What's the current state of SDN development these days?
I remember working on related projects about ten years ago in grad
school, and even back then it felt like a somewhat naive and overhyped
form of "engineering innovation."
Take OpenFlow, for example: every TCP connection had to go through
the controller to set up a per-connection flow match entry for the
path. It always struck me as a bit absurd.
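That reactive pattern can be sketched in a few lines. Everything here (class names, the string "action", the toy flow table) is illustrative, not a real OpenFlow or controller API:

```python
# Toy model of reactive OpenFlow-style flow setup. The first packet of
# each connection misses in the switch's flow table and is punted to
# the controller, which installs a per-connection entry; later packets
# hit the cached entry without a controller round trip.

class Controller:
    def packet_in(self, key):
        # A real controller would compute a path and push flow-mods to
        # every switch along it; here we just return a forwarding action.
        src, dst, dport = key
        return f"forward({dst})"

class Switch:
    def __init__(self, controller):
        self.flow_table = {}          # (src, dst, dport) -> action
        self.controller = controller

    def handle_packet(self, src, dst, dport):
        key = (src, dst, dport)
        if key not in self.flow_table:
            # Table miss: one controller round trip per new connection.
            self.flow_table[key] = self.controller.packet_in(key)
        return self.flow_table[key]

sw = Switch(Controller())
sw.handle_packet("10.0.0.1", "10.0.0.2", 80)  # miss -> controller round trip
sw.handle_packet("10.0.0.1", "10.0.0.2", 80)  # hit -> fast path
```

The per-connection controller round trip on the first packet is exactly the overhead the comment objects to.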
At the time, the main push came from Stanford's "clean slate"
networking project led by Prof. Nick McKeown. They spun off things like
Open vSwitch, Big Switch Networks, and even open-source router efforts
like NetFPGA. Later, the professor went back into industry.
Looking back, the whole movement feels like a startup-driven experiment
that got heavily "packaged" but never really solved any fundamental
problem. I mean, traditional distributed-routing-based network gear was
already working fine; didn't they already have admin interfaces
for configuration anyway (or do we just call that admin interface SDN)? lol ~
tguvot wrote 3 hours 45 min ago:
i was in close contact with telecoms during that timeframe. they
went bananas with it because all of it was new to them. so they
abused and misused it.
one of them, for example, used OpenDaylight not for its OpenFlow
capabilities, but via some heavily customized plugin as a kind of
orchestrator for automation, with some crazy YANG models that were
sent for execution to a downstream orchestrator.
but from their perspective, and the perspective of management, they
were doing SDN
traditional network gear had "element controllers". some of them got
rebranded into "SDN-something" and got interface facelifts
ps. sdn/openflow as you describe it was absolutely out of the question
for deployment in production networks. they could talk about all the
benefits of it, but nobody dared to do anything with it and, arguably,
they had no real need
BobbyTables2 wrote 4 hours 42 min ago:
To me, server/networking hardware companies have a wet dream of
manipulating workloads on physical servers the way one manipulates
VMs in cloud computing.
Except the dream is to not do it just within a blade enclosure, but
across blades in multiple racks, with network based storage in a
multi-tenant environment. Maybe even across datacenters.
At some point, dealing (in an automated manner) with discovery,
abstraction, and routing across different networking topologies,
blade enclosures, rack switches, etc. becomes insane.
Of course a sysadmin with a few shell scripts could practically do
the same for meaningful use cases without the general solution's
decade-long engineering effort and vendor lock-in...
wmf wrote 5 hours 13 min ago:
A lot of mistakes were made. Almost all the code has been thrown away
and all the details are different, but maybe some of the ideas
influenced things that exist today.
justahuman74 wrote 5 hours 14 min ago:
SDN is great if you're trying to build something like a multi-tenant
cloud on top of another network of machines. Your DPUs can handle all
the overlay logic as if there were a top-of-rack switch in each
chassis.
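As a rough illustration of the overlay logic such a DPU takes on, here is the VXLAN header layout from RFC 7348, which this kind of encapsulation typically uses. The function name and framing are assumptions for the sketch; a real DPU would also prepend outer Ethernet/IP/UDP headers and do all of this in hardware:

```python
import struct

def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    # 8-byte VXLAN header per RFC 7348: flags byte 0x08 (VNI valid),
    # 24 reserved bits, 24-bit VNI, 8 reserved bits. The tenant's
    # original Ethernet frame rides inside, so each tenant gets its
    # own virtual L2 segment keyed by VNI.
    header = struct.pack("!II", 0x08 << 24, (vni & 0xFFFFFF) << 8)
    return header + inner_frame

pkt = vxlan_encap(vni=5001, inner_frame=b"\x00" * 64)
assert len(pkt) == 8 + 64
```

The 24-bit VNI is what makes the multi-tenant part work: about 16 million isolated segments versus the 4096 available with VLAN tags.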
xxpor wrote 5 hours 41 min ago:
It's all at the big cloud service providers. Not as much focused on
the physical network (as originally imagined), but in the overlay
networks. See the various DPUs like Intel IPU, Nvidia/Mellanox
BlueField, etc. Nvidia DOCA even uses OvS as the sort of example
out-of-the-box software to implement networking on BlueField. When your
controller is Arm cores 5 cm away on the same PCB, doing per-connection
setup is no longer as absurd ;)
themafia wrote 6 hours 27 min ago:
> 2003: The goal of the 100Ã100 project was to create communication
architectures that could provide 100Mb/s networking for all 100 million
American homes.
Well you failed horribly.
> The project brought together researchers from Carnegie Mellon,
Stanford, Berkeley, and AT&T.
I think I see why.
> This research led to the 4D architecture for logically centralized
network control of a distributed data plane
What? How was this meant to benefit citizens?
> Datacenter owners grew frustrated with the cost and complexity of the
commercially available networking equipment; a typical datacenter
switch cost more than $20,000 and a hyperscaler needed about 10,000
switches per site. They decided they could build their own switch box
for about $2,000 using off-the-shelf switching chips from companies
such as Broadcom and Marvell
What role did the NSF play here? It sounds like basic economics did
most of the actual work.
> The start-up company Nicira, which emerged from the NSF-funded Ethane
project, developed the Network Virtualization Platform (NVP)26 to meet
this need
Which seems to have _zero_ mentions outside of academic papers.
wmf wrote 5 hours 18 min ago:
Nicira NVP is now VMware NSX, which is pretty successful.
AWS/GCP/Azure VPC are also probably inspired by Nicira.
xxpor wrote 5 hours 39 min ago:
>Which seems to have _zero_ mentions outside of academic papers.
Nicira or NVP?
SurceBeats wrote 7 hours 29 min ago:
The 10-year research-to-production timeline is the key lesson. Today's
funding (VC or government grants) demands results in 2-3 years. We've
systematically eliminated the "patient capital" that creates
foundational infrastructure imho...
rjzzleep wrote 42 min ago:
In other words, China's success is in part similar to what used to make
the US successful. Any lessons to be taken from that? No.
throwup238 wrote 36 min ago:
The Chinese follow a five-year cycle: [1] The pattern was adopted
from the Soviet Union. Take whatever lessons from that you will.
URI [1]: https://en.wikipedia.org/wiki/Five-year_plans_of_China
zorked wrote 14 min ago:
Oh, so ominous. Your claim is that the number 5 is cursed because
it is a communist number?
There is no lesson to take from the number 5. There are lessons
to take from longer-term planning.
throwup238 wrote 9 min ago:
> Your claim is that the number 5 is cursed because it is a
communist number?
That's your own nonsensical strawman, not mine. Who in their
sane mind would stoop to such silly numerology? That sounds
positively medieval.
What were you even thinking when you wrote your reply? You're
really going to have to unpack your thought process for me to
understand what you said.
chuckadams wrote 6 hours 50 min ago:
To say nothing of systematically eliminating the foundational
infrastructure for nationally funded science in general.
zdw wrote 7 hours 45 min ago:
I worked with quite a few of the folks mentioned in this article when I
was at the Open Networking Foundation, if anyone has questions.
matt_daemon wrote 5 hours 24 min ago:
What's your view on how these people actually impacted the adoption
of SDN in general?
> The investments NSF made in SDN over the past two decades have paid
huge dividends.
In my view this seems a little overblown. The general idea of
separation of control and data plane is just that - an idea. In
practice, none of the early firms (like Nicira) have had any
significant impact on what's happening in industry. Happy to be
corrected if that's not accurate!
zdw wrote 2 hours 53 min ago:
Depends where you are in the industry - the hyperscalers
specifically have budget to afford a team to write P4 or other SDN
code to manage their networks in production, so they're probably
the biggest beneficiaries.
Lower end, it did make programmability more accessible to more
folks and enabled whitebox switches to compete against entrenched
players to a far greater extent than previously possible. Again,
hyperscalers are going to be the main folks who buy this kind of
gear and run SONiC or similar on it, so they can own the full
switch software stack.
Many of the startup companies in the SDN space did have successful
exits into larger players - for example Nicira into VMware,
Barefoot (Tofino switch chip) and Ananki (the ONF 4G/5G spinoff)
into Intel. Also, much of the software was developed as open
source, and is still out there to be used and built on.
heathermiller wrote 8 hours 43 min ago:
What a wonderful chronicle of how esoteric research became
non-esoteric and truly world-changing, and how the NSF enabled it.
Pour one out for the NSF folks. RIP
Animats wrote 10 hours 2 min ago:
> a network should have logically centralized control, where the
control software has network-wide visibility and direct control across
the distributed collection of network devices.
Including a backdoor for wiretapping in SDN-enabled routers.
acdha wrote 7 hours 9 min ago:
Is it really a "back door" when it's controlled by the network
owner? It feels like we need a different term for that since it's
increasingly common on large networks.
Animats wrote 3 hours 36 min ago:
The question is who can send commands as the network owner. The basic
idea of SDN is that when A wants to talk to B, a message is sent to
some control point to determine the path. The path is then pushed
down to the routers along it. Packets which would ordinarily
go nowhere near eavesdropping point C can be redirected to go
through C, on a per-A/B-pair basis.
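A minimal sketch of that concern, with a made-up five-node topology: the same centralized path computation that serves A-to-B traffic can just as easily be told to detour every A/B flow through a waypoint C. The graph, node names, and helper functions are all illustrative:

```python
from collections import deque

def shortest_path(graph, src, dst):
    # Plain BFS: what a centralized controller would normally install.
    prev, seen, q = {}, {src}, deque([src])
    while q:
        node = q.popleft()
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                prev[nxt] = node
                q.append(nxt)
    return None

def path_via(graph, src, dst, waypoint):
    # Controller-enforced detour: route src -> waypoint -> dst,
    # regardless of whether the waypoint lies on the natural path.
    return (shortest_path(graph, src, waypoint)
            + shortest_path(graph, waypoint, dst)[1:])

# Toy topology: C hangs off the side and is not on the A-B shortest path.
g = {"A": ["R1"], "R1": ["A", "R2", "C"], "R2": ["R1", "B", "C"],
     "C": ["R1", "R2"], "B": ["R2"]}

shortest_path(g, "A", "B")   # ['A', 'R1', 'R2', 'B'] - C never sees traffic
path_via(g, "A", "B", "C")   # ['A', 'R1', 'C', 'R2', 'B'] - C sees everything
```

Nothing in the data plane distinguishes the two paths; only whoever controls the control point decides which one gets installed, which is the crux of the wiretapping worry.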
blackmanta wrote 8 hours 30 min ago:
Unless the goal of the backdoor is to redirect traffic flows through
packet inspection devices that the attacker also controls, the
decoupling of the control and data plane in SDN deployments requires
a more creative, intricate solution to allow for wiretapping compared
to traditional routers.
DIR <- back to front page