svn commit: r50497 - head/en_US.ISO8859-1/htdocs/news/status

Benjamin Kaduk bjk at FreeBSD.org
Sat Jul 15 00:47:55 UTC 2017


Author: bjk
Date: Sat Jul 15 00:47:54 2017
New Revision: 50497
URL: https://svnweb.freebsd.org/changeset/doc/50497

Log:
  Add 2017Q2 Ceph entry from Willem Jan Withagen

Modified:
  head/en_US.ISO8859-1/htdocs/news/status/report-2017-04-2017-06.xml

Modified: head/en_US.ISO8859-1/htdocs/news/status/report-2017-04-2017-06.xml
==============================================================================
--- head/en_US.ISO8859-1/htdocs/news/status/report-2017-04-2017-06.xml	Sat Jul 15 00:22:08 2017	(r50496)
+++ head/en_US.ISO8859-1/htdocs/news/status/report-2017-04-2017-06.xml	Sat Jul 15 00:47:54 2017	(r50497)
@@ -1021,4 +1021,145 @@
 	to upstream when they break TensorFlow on &os;.</task>
     </help>
   </project>
+
+  <project cat='proj'>
+    <title>Ceph on &os;</title>
+
+    <contact>
+      <person>
+	<name>
+	  <given>Willem Jan</given>
+	  <common>Withagen</common>
+	</name>
+	<email>wjw at digiware.nl</email>
+      </person>
+    </contact>
+
+    <links>
+      <url href="http://ceph.com">Ceph Main Site</url>
+      <url href="https://github.com/ceph/ceph">Main Repository</url>
+      <url href="https://github.com/wjwithagen/ceph">My &os; Fork </url>
+    </links>
+
+    <body>
+      <p>Ceph is a distributed object store and file system designed to provide
+	excellent performance, reliability and scalability.</p>
+
+      <ul>
+	<li><p>Object Storage</p>
+
+	  <p>Ceph provides seamless access to objects using native
+	    language bindings or <tt>radosgw</tt>, a REST interface
+	    that is compatible with applications written for S3 and
+	    Swift.</p></li>
+
+	<li><p>Block Storage</p>
+
+	  <p>Ceph's RADOS Block Device (RBD) provides access to block
+	    device images that are striped and replicated across the
+	    entire storage cluster.</p></li>
+
+	<li><p>File System</p>
+
+	  <p>Ceph provides a POSIX-compliant network file system that
+	    aims for high performance, large data storage, and maximum
+	    compatibility with legacy applications.</p></li>
+      </ul>
+
+      <p>I started looking into Ceph because the HAST solution with
+	CARP and <tt>ggate</tt> did not really do what I was looking
+	for.  My aim is to run a Ceph storage cluster of nodes running
+	ZFS, with user stations running <tt>bhyve</tt> on RBD disks
+	that are stored in Ceph.</p>
+
+      <p>Compiling Ceph on &os; now builds most of the tools
+	available in Ceph.</p>
+
+      <p>The most important changes since the last report are:</p>
+
+      <ul>
+	<li>Ceph has released the release candidate of v12.1.0 (aka
+	  Luminous); the corresponding packaging is sitting in my tree
+	  waiting for Luminous to be actually released.</li>
+
+	<li><tt>ceph-fuse</tt> works, and allows mounting of
+	  <tt>cephfs</tt> filesystems.  The speed is not impressive,
+	  but it does work.</li>
+
+	<li><tt>rbd-ggate</tt> is available to create a Ceph
+	  RBD-backed device.  <tt>rbd-ggate</tt> was submitted by
+	  Mykola Golub.  Once a cluster is functioning, it works in a
+	  rather simple fashion: <tt>rbd import</tt> and <tt>rbd-ggate
+	  map</tt> create <tt>ggate</tt>-like devices backed by the
+	  Ceph cluster (see the sketch after this list).</li>
+      </ul>
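+
+      <p>As a minimal sketch of how these pieces fit together (the
+	monitor address, pool name, and image names below are only
+	placeholders):</p>
+
+      <ul>
+	<li><tt>ceph-fuse -m mon.example.net:6789 /mnt/cephfs</tt>
+	  mounts a <tt>cephfs</tt> filesystem through FUSE.</li>
+
+	<li><tt>rbd import disk.img mypool/vmdisk</tt> imports a raw
+	  disk image into the cluster as an RBD image.</li>
+
+	<li><tt>rbd-ggate map mypool/vmdisk</tt> then exposes that
+	  image as a <tt>ggate</tt>-like device (for example,
+	  <tt>/dev/ggate0</tt>) that can be used like any other &os;
+	  block device.</li>
+      </ul>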
+
+      <p>Other improvements since the previous report:</p>
+
+      <ul>
+	<li>Some bugs in the <tt>init-ceph</tt> code (needed for
+	  <tt>rc.d</tt>) are being fixed.</li>
+
+	<li>RBD and rados are functioning.</li>
+
+	<li>The needed compatibility code was written so that &os; and
+	  Linux daemons can operate together in a single cluster.</li>
+
+	<li>More of the awkward dependencies on Linux-isms have been
+	  removed; only <tt>/bin/bash</tt> is there to stay.</li>
+      </ul>
+
+      <p>Looking forward, the next official release of Ceph is called
+	Luminous (v12.1.0).  As soon as it is available from upstream,
+	a port will be provided for &os;.</p>
+
+      <p>To get things running on a &os; system, run <tt>pkg install
+	  net/ceph-devel</tt> or clone <a
+	  href="https://github.com/wjwithagen/ceph">https://github.com/wjwithagen/ceph</a>,
+	check out the <tt>wip.freebsd.201707</tt> branch, and build
+	manually by running <tt>./do_freebsd.sh</tt> in the checkout
+	root.</p>
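+
+      <p>Spelled out as a command sequence, a source build looks
+	roughly like this (assuming <tt>git</tt> is installed):</p>
+
+      <ul>
+	<li><tt>git clone https://github.com/wjwithagen/ceph</tt></li>
+
+	<li><tt>cd ceph</tt></li>
+
+	<li><tt>git checkout wip.freebsd.201707</tt></li>
+
+	<li><tt>./do_freebsd.sh</tt></li>
+      </ul>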
+
+      <p>Parts not (yet) included:</p>
+
+      <ul>
+	<li>KRBD — but <tt>rbd-ggate</tt> is usable in its
+	  stead</li>
+
+	<li>BlueStore — &os; and Linux have different AIO APIs,
+	  and that incompatibility needs to be resolved somehow.
+	  Additionally, there is discussion in &os; about
+	  <tt>aio_cancel</tt> not working for all device types.</li>
+      </ul>
+    </body>
+
+    <help>
+      <task>Run integration tests to see if the &os; daemons will work
+	with a Linux Ceph platform.</task>
+
+      <task>Investigate the keystore, which can be embedded in the
+	kernel on Linux and currently prevents building CephFS and
+	some other parts.  The first question is whether it is really
+	required, or whether only KRBD requires it.</task>
+
+      <task>Scheduler information is not used at the moment, because the
+	schedulers work rather differently between Linux and &os;.
+	At some point, this will need attention (in
+	<tt>src/common/Thread.cc</tt>).</task>
+
+      <task>Improve the &os; init scripts in the Ceph stack, both for
+	testing purposes and for running Ceph on production machines.
+	Work on <tt>ceph-disk</tt> and <tt>ceph-deploy</tt> to make
+	them more &os;- and ZFS-compatible.</task>
+
+      <task>Build a test cluster and start running some of the
+	teuthology integration tests on it.  Teuthology wants to build
+	its own <tt>libvirt</tt> and that does not quite work with all
+	the packages &os; already has in place.  There are many
+	details to work out here.</task>
+
+      <task>Design a virtual disk implementation that can be used with
+	<tt>bhyve</tt> and attached to an RBD image.</task>
+    </help>
+  </project>
 </report>
