svn commit: r49850 - head/en_US.ISO8859-1/htdocs/news/status
Benjamin Kaduk
bjk at FreeBSD.org
Sat Jan 14 22:42:07 UTC 2017
Author: bjk
Date: Sat Jan 14 22:42:06 2017
New Revision: 49850
URL: https://svnweb.freebsd.org/changeset/doc/49850
Log:
Add Ceph entry from Willem Jan Withagen
Modified:
head/en_US.ISO8859-1/htdocs/news/status/report-2016-10-2016-12.xml
Modified: head/en_US.ISO8859-1/htdocs/news/status/report-2016-10-2016-12.xml
==============================================================================
--- head/en_US.ISO8859-1/htdocs/news/status/report-2016-10-2016-12.xml Sat Jan 14 19:19:39 2017 (r49849)
+++ head/en_US.ISO8859-1/htdocs/news/status/report-2016-10-2016-12.xml Sat Jan 14 22:42:06 2017 (r49850)
@@ -888,4 +888,176 @@
couple of weeks.</p>
</body>
</project>
+
+ <project cat='proj'>
+ <title>Ceph on &os;</title>
+
+ <contact>
+ <person>
+ <name>
+ <given>Willem Jan</given>
+ <common>Withagen</common>
+ </name>
+ <email>wjw at digiware.nl</email>
+ </person>
+ </contact>
+
+ <links>
+ <url href="http://ceph.com">Ceph Main Site</url>
+ <url href="https://github.com/ceph/ceph">Main Repository</url>
+ <url href="https://github.com/wjwithagen/ceph/tree/wip.&os;">My &os; Fork</url>
+ </links>
+
+ <body>
+ <p>Ceph is a distributed object store and file system designed
+ to provide excellent performance, reliability and
+ scalability:</p>
+
+ <ul>
+ <li><p>Object Storage</p>
+
+ <p>Ceph provides seamless access to objects using native
+ language bindings or radosgw, a REST interface that’s
+ compatible with applications written for S3 and
+ Swift.</p></li>
+
+ <li><p>Block Storage</p>
+
+ <p>Ceph’s RADOS Block Device (RBD) provides access to block
+ device images that are striped and replicated across the
+ entire storage cluster.</p></li>
+
+ <li><p>File System</p>
+
+ <p>Ceph provides a POSIX-compliant network file system that
+ aims for high performance, large data storage, and maximum
+ compatibility with legacy applications.</p></li>
+ </ul>
+
+ <p>I started looking into Ceph because the HAST solution with
+ CARP and <tt>ggate</tt> did not really do what I was looking
+ for. Instead, I aim to run a Ceph storage cluster of storage
+ nodes running ZFS, with user stations running <tt>bhyve</tt>
+ on RBD disks that are stored in Ceph.</p>
+
+ <p>The &os; build produces most of the tools in Ceph. Note
+ that items depending on kernel RBD will not work, since &os;
+ does not have kernel RBD (yet).</p>
+
+ <p>Most notable progress since the last report:</p>
+
+ <ul>
+ <li>RBD is now buildable and can be used to manage
+ <tt>RADOS Block Device</tt>s.</li>
+
+ <li>All tests run to completion for the current selection of
+ tools, though the needed (minor) patches have yet to be
+ pulled into HEAD.</li>
+
+ <li>CMake is now the only way of building Ceph.</li>
+
+ <li>The threading/polling code has been reworked for the
+ simple socket code. It now uses a self-pipe, instead of
+ relying on an odd Linux-specific <tt>shutdown()</tt>-signaling
+ feature.</li>
+
+ <li>The EventKqueue code was modified to work around the
+ "feature" that starting threads destroys kqueue handles. This
+ code was only just finished, so it has not yet been submitted
+ to the main repository.</li>
+
+ <li>We investigated differences between &os; and Linux for
+ <tt>SO_REUSEADDR</tt> and <tt>SO_REUSEPORT</tt>. Fortunately,
+ the code is only used during testing, so disabling these
+ features only delays progress in the tests.</li>
+
+ <li>A Jenkins instance regularly tests both
+ <tt>ceph/ceph/master</tt> and
+ <tt>wjwithagen/ceph/wip.FreeBSD</tt>, providing ongoing
+ verification of buildability and the tests: <a
+ href="http://cephdev.digiware.nl:8180/jenkins/">http://cephdev.digiware.nl:8180/jenkins/</a>.</li>
+ </ul>
+
+ <p>Build Prerequisites</p>
+
+ <p>Compiling and building Ceph is tested on 12-CURRENT with
+ its clang 3.9.0, but 11-RELEASE will probably also work,
+ given earlier experience with clang 3.7.0 from 11-CURRENT.
+ Interestingly, the clang 3.8.0 that 12-CURRENT carried for a
+ while did not work as well as either 3.7.0 or 3.9.0.</p>
+
+ <p>The clang 3.4 present in 10-STABLE does not have the
+ required capabilities to compile everything.</p>
+
+ <p>The following setup will get things running for &os;:</p>
+
+ <ol>
+ <li>Install <tt>bash</tt> and link it in <tt>/bin</tt>.</li>
+
+ <li>It is no longer necessary to add a definition of
+ <tt>ENODATA</tt> to <tt>/usr/include/errno.h</tt>.</li>
+
+ <li>Clone the GitHub repository
+ (http://github.com/wjwithagen/ceph.git) and check out the
+ <tt>wip.FreeBSD</tt> branch.</li>
+
+ <li>Run <tt>./do_FreeBSD.sh</tt> to start the build.</li>
+ </ol>
+
+ <p>The old build method using automake is no longer used; see
+ <tt>README.FreeBSD</tt> for more details.</p>
+
+ <p>Parts not (yet) included:</p>
+
+ <ul>
+ <li>KRBD: the kernel RADOS Block Device is implemented in the
+ Linux kernel, but not in the &os; kernel. Perhaps
+ <tt>ggated</tt> could be used as a template, since it does
+ some of the same things as KRBD, just between two disks. It
+ also has a userspace counterpart, which could ease
+ development.</li>
+
+ <li>BlueStore: &os; and Linux have different AIO APIs, and
+ that incompatibility needs to be resolved somehow.
+ Additionally, there is discussion in &os; about
+ <tt>aio_cancel</tt> not working for all device types.</li>
+
+ <li>CephFS: Cython tries to access an internal field of
+ <tt>struct dirent</tt>, which does not compile on &os;.</li>
+
+ <li>Tests that verify the correct working of the above are
+ also excluded from the testset.</li>
+ </ul>
+ </body>
+
+ <help>
+ <task>Run integration tests to see if the &os; daemons will work
+ with a Linux Ceph platform.</task>
+
+ <task>Compile and test the userspace RBD (RADOS Block
+ Device). This currently works, but testing has been
+ limited.</task>
+
+ <task>Investigate and see if an in-kernel RBD device could be
+ developed akin to &os;'s <tt>ggate</tt>.</task>
+
+ <task>Investigate the keystore, which can be embedded in the
+ kernel on Linux, and which currently prevents building CephFS
+ and some other components. The first question is whether it
+ is really required, or whether only KRBD requires it.</task>
+
+ <task>Scheduler information is not used at the moment,
+ because the schedulers work rather differently on &os; and
+ Linux. But at a certain point, this will need attention in
+ <tt>src/common/Thread.cc</tt>.</task>
+
+ <task>Integrate the &os; <tt>/etc/rc.d</tt> init scripts into
+ the Ceph stack. This helps with testing, and also enables
+ running Ceph on production machines.</task>
+
+ <task>Build a test cluster and start running some of the
+ <tt>teuthology</tt> integration tests on it.</task>
+
+ <task>Design a virtual disk implementation that can be used
+ with <tt>bhyve</tt> and attached to an RBD image.</task>
+ </help>
+ </project>
</report>