As many of you already know, I left Likewise last week (April 15th). It has been an exhilarating four and a half years doing some pretty great things with some really terrific people. We built what I am convinced will be the definitive SMB/CIFS stack for the non-Windows world in a record 15 months. What an achievement! I will miss working with my Likewise team, and I wish them the best of luck and good fortune as they move forward.
So what’s next for an old hand at distributed systems? Hopefully, a reach for the “clouds” … I’m usually pretty cautious and skeptical when it comes to analyzing new trends, but this one seems pretty obvious. To say that the cloud promises to be disruptive would be banal; it is the new “New World” for distributed computing. I think success will belong to those who discover the best route to this New World, and I want to be part of that discovery process.
So I’m going to take a few weeks off, and then get busy figuring out how best I can get involved in this new cloud world.
To all of you who read my blog, thank you very much. Hopefully, I will have interesting things to talk about in the future.
One of the things I’ve been lax about is creating a Platform SDK for the Likewise Open project. I’m happy to report that we are in the midst of a build restructure that should wrap up by the end of the week. The result of this restructure is that you will be able to install devel packages for Likewise Open, with all headers and libraries installed under /opt/likewise/include and /opt/likewise/lib.
With those in place, you can write your own DCE/RPC client and server programs. You can write drivers for the LWIO platform – you can create an NFS server, a WebDAV server, or an FTP server. You can write your own backend to leverage the LWIO CIFS stack – say you want to write an LWIO driver for the ZFS file system. All you would need to do is install the Likewise Open Platform SDK devel packages and start coding.
Another great advantage is that we’ve significantly reduced build times for the Likewise Open platform. Jerry reports that with our first wave of componentization, the entire platform builds in 22 minutes on a quad-core, 4 GB machine, down from around 30 minutes. We think we can bring this down another 25% – so stay tuned.
My own particular requirement was that OEMs should be able to pull down the Likewise Open tree and easily get up and running. I think we’re getting there.
This effort should wrap up by the end of this week.
One of the thornier problems in building Linux-Windows interoperability technology has been designing the domain join process. The problem is really one of how you marry two different worlds – the Linux operating system and the Windows Active Directory domain. The first question was always how to get the machine name from the Linux machine to pass to the Active Directory domain controller.
In earlier versions, we would grab this information from the Linux/UNIX/Mac machine and pass it to our authentication engine. The problem is that collecting this information reliably across disparate UNIX operating systems is very painful.
As the Likewise Open architecture has evolved, it has become a full-fledged development and programming environment, to the point where an end user can specify a machine name directly to the authentication engine, which then stores that information in the Likewise registry. The authentication engine’s domain join process is thus rightly decoupled entirely from the Linux/UNIX/Mac hostname and other configuration files.
My development station could be krishnag-ubuntu.likewise.com, but I could join my machine to the Active Directory domain as foo.likewise.com. In the real world, I would join my machine as krishnag-ubuntu.likewise.com, but the decoupling of the UNIX system name from the Active Directory FQDN and NetBIOS name provides a useful architectural separation – a “join” point – between the two worlds.
It is something we’ll explore further as the system evolves.
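The decoupling described above is easy to sketch. This is purely an illustration – the key paths and function are made up for the example and are not the real Likewise registry API:

```python
# Illustrative sketch of storing the AD join name in a registry-style
# key/value store, independent of the UNIX hostname. Key paths are invented.
import socket

registry = {}

def join_domain(machine_name, domain):
    """Record the AD machine name without touching the local hostname."""
    registry["Services/lsass/MachineName"] = machine_name
    registry["Services/lsass/Domain"] = domain

# The host may be krishnag-ubuntu, yet join the domain as foo.likewise.com.
join_domain("foo", "likewise.com")
local_name = socket.gethostname()  # unchanged by the join
```

The point is simply that the join name lives in the authentication engine’s own store, so nothing needs to scrape hostname files across disparate UNIX flavors.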
I guess this is a long, roundabout way of arguing for a modular and simple architecture in our system. As the amount of code in the Likewise Open project grows, we will need to be mindful that the system remains easily maintainable and manageable. I’m going to continue this theme in another post…
From: Gerald (Jerry) Carter
Sent: Wednesday, March 03, 2010 6:03 AM
Subject: Re: Morning connection testing
And the magic 10k….
23904 39.1 00:12:19 1557200 916072 /opt/likewise/sbin/lwiod --syslog
23946 8.1 00:02:34 591192 6252 /opt/likewise/sbin/lsassd --syslog
23849 31.3 00:09:53 395068 2528 /opt/likewise/sbin/lwregd
24009 0.0 00:00:00 316600 2072 /opt/likewise/sbin/srvsvcd --syslog
23920 6.8 00:02:09 244968 2780 /opt/likewise/sbin/netlogond --syslog
23892 0.0 00:00:00 210440 1696 /opt/likewise/sbin/dcerpcd -f
23847 0.0 00:00:00 178104 752 /opt/likewise/sbin/lwsmd --start-as-daemon
Server statistics [level 0]:
Number of connections: 
Maximum Number of connections: 
Number of sessions: 
Maximum Number of sessions: 
Number of tree connects: 
Maximum Number of tree connects: 
Number of open files: 
Maximum Number of open files: 
Gerald Carter wrote:
> If we could figure out the memory usage in lwio, this would be so
> $ (ps -eo pid,%cpu,cputime,vsz,rss,args | grep likewise.*sbin |\
> grep -v grep | sort -r -k4 -n) && /opt/likewise/bin/lwio-cli
> PID %CPU TIME VSZ RSS COMMAND
> 23904 39.6 00:09:10 1491668 766024 /opt/likewise/sbin/lwiod --syslog
> 23946 9.2 00:02:07 591192 6648 /opt/likewise/sbin/lsassd --syslog
> 23849 35.5 00:08:12 395068 2772 /opt/likewise/sbin/lwregd
> 24009 0.0 00:00:00 316600 2212 /opt/likewise/sbin/srvsvcd --syslog
> 23920 7.7 00:01:47 244968 2756 /opt/likewise/sbin/netlogond --syslog
> 23892 0.0 00:00:00 210440 1708 /opt/likewise/sbin/dcerpcd -f
> 23847 0.0 00:00:00 178104 860 /opt/likewise/sbin/lwsmd --start-as-daemon
> Server statistics [level 0]:
> Number of connections: 
> Maximum Number of connections: 
> Number of sessions: 
> Maximum Number of sessions: 
> Number of tree connects: 
> Maximum Number of tree connects: 
> Number of open files: 
> Maximum Number of open files: 
Senior Software Developer Likewise-CIFS
“What man is a man who does not make the world better?” –Balian
Wei is back from China after a well-deserved vacation. She is (as always) raring to get started on a new project, so I’ve asked her to take point on our net utilities. Stay tuned for the familiar net utility on Linux. We’re not going to create a whole bunch of strange commands; we will make available the well-known ones that Windows users are familiar with – net use, net view, net localgroup, net user, net share, net time.
Adam Bernstein is going to work on smbshell. This utility will be similar to his super-popular regshell utility that lets you browse the Likewise registry. The best part of regshell was its tab-completion support, and smbshell will be similar – it will allow you to interactively walk an SMB share and copy files. Copying files from the command line in the Linux world is an absolute nightmare, and we hope to make it drop-dead simple.
We’ll push these out in the 5.4 release, which, by the way, has been pushed out to the public git tree – we will need to release binaries quickly.
Thanks for reading.
This morning we blasted past 8K concurrent connections to our SMB stack. We were originally trying to get FSCT running and we’re pretty close – we’ve run into client-side configuration issues and controller-side issues, and we’re working with Microsoft to get these resolved. I’m probably going to head up there in the next couple of weeks to figure out how we can collaborate on making sure that FSCT is tested against non-Windows SMB servers.
In the interim, Brian fixed up our redirector to force it to open multiple connections out to the Likewise CIFS stack. I’m sketchy on the details, but the idea is to force a single connection per user, so 10K users translate to 10K connections on the server.
Once we did this, Jerry ran the test this morning and we blitzed past 8K SMB connections. We had hit that limit earlier only because the server was hard-wired to a maximum of 8K connections; once we removed the cap, we saw it go well past 8K. Our belief is that we will scale to the resource limits of the server.
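The per-user connection policy described above amounts to keying connections by user rather than sharing one transport. A minimal sketch (all names are illustrative; the real redirector manages sockets, not strings):

```python
# Sketch: one server connection per distinct user, so N users yield
# N connections. Strings stand in for real transport sockets.
connections = {}

def connection_for(user, server="likewise-cifs"):
    key = (user, server)
    if key not in connections:
        connections[key] = f"{server}:445/conn{len(connections)}"
    return connections[key]

for n in range(10000):
    connection_for(f"user{n:05d}")

print(len(connections))  # one connection per user
```

A repeat request from the same user reuses its existing connection, which is why the count tracks users, not requests.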
Note that the test shows that the stack can create the necessary data structures for 8K (or more) SMB sessions. We still need to measure I/O throughput. We want to ensure that with 10K or more SMB sessions – given that the server will statistically multiplex them (some connections doing active I/O while other connections are idle) – the active sessions can deliver reasonable throughput guarantees.
My own schooling in system service design was always in thread-based systems, and at Microsoft we always wrote thread-based, single-process services. So it was surprising to hear from several people in the open source world that threads are bad and evil and that you can’t get decent performance guarantees using them. lwio is our single-process, multi-threaded architecture, and so far we seem to be doing okay. Perhaps we’ll get tripped up somewhere, but right now it’s looking pretty good.
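For what it’s worth, the single-process, thread-pool service model is easy to sketch – shown here in Python purely as an illustration (lwio itself is native code, and this is not its API):

```python
# One process, a pool of worker threads servicing requests concurrently.
from concurrent.futures import ThreadPoolExecutor

def handle_request(req_id):
    # Stand-in for real work: parse a packet, hit the filesystem, reply.
    return req_id * 2

# A bounded pool caps thread count regardless of how many requests arrive.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(handle_request, range(100)))
```

The design argument is that a bounded worker pool gives concurrency without a thread per connection, which is one common way thread-based servers stay well-behaved under load.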
For a while, we were struggling with interim responses on the SMB2 server infrastructure. But an absolutely herculean effort from Sriram has resulted in our having complete SMB2 file server support. It turns out that we were incorrectly generating the fids for a file handle, causing the Windows smb redirector to get confused.
All 25 of our tier 1 applications pass – see my earlier post for the list of mainstream applications that we test against.
Sriram figured out how to compute file ids – it turns out they are good old-fashioned GUIDs: one portion of the GUID is the persistent part of the file id, and the other portion is the ephemeral part.
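The scheme can be sketched roughly like this. Note the even 8-byte/8-byte split and the names are my own illustration, not the actual Likewise code:

```python
# Sketch: a 16-byte GUID whose halves serve as the persistent and
# ephemeral parts of a file id. The 8/8 split is an assumption.
import uuid

def make_file_id():
    guid = uuid.uuid4().bytes   # 16 random bytes
    persistent = guid[:8]       # stable identity for the open file
    ephemeral = guid[8:]        # valid only for the current session
    return persistent, ephemeral

persistent, ephemeral = make_file_id()
```

The appeal of the split is that a client can reconnect and re-identify an open file by its persistent half while the server cheaply invalidates the ephemeral half.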
We still need to implement SMB 2.1 semantics, especially the Windows 7 oplock package, but that is a Windows 7 artifact.
So for now, we’re going to enjoy the view from up here.
BTW, feel free to build the lwio stack and enable SMB2 support. If you find errors or bugs, send them our way. Our model is to continually focus on making things better, so all feedback is greatly appreciated.