
IRC log for #gluster, 2014-10-15


All times shown according to UTC.

Time Nick Message
00:03 justinmburrous joined #gluster
00:31 jbrooks joined #gluster
00:48 overclk joined #gluster
00:54 * ttk is also interested in the answer to Jamoflaw's question
00:58 Zordrak_ joined #gluster
01:04 justinmburrous joined #gluster
01:29 buhman_ joined #gluster
01:30 delhage_ joined #gluster
01:32 siel_ joined #gluster
01:32 masterzen_ joined #gluster
01:32 ueberall joined #gluster
01:33 Andreas-IPO joined #gluster
01:33 ackjewt joined #gluster
01:33 johnmwilliams__ joined #gluster
01:33 johnmwilliams__ joined #gluster
01:33 Lee- joined #gluster
01:34 verdurin joined #gluster
01:34 ws2k3 joined #gluster
01:40 sage__ joined #gluster
01:42 atrius_ joined #gluster
01:45 hflai joined #gluster
01:45 kalzz_ joined #gluster
01:45 Ramereth|home joined #gluster
01:45 johnmwilliams__ joined #gluster
01:45 Andreas-IPO joined #gluster
01:45 frankS2 joined #gluster
01:45 skippy joined #gluster
01:45 frankS2 joined #gluster
01:45 skippy joined #gluster
01:45 johnmwilliams__ joined #gluster
01:45 frankS2 joined #gluster
01:45 mikedep3- joined #gluster
01:45 sac`away joined #gluster
01:45 weykent joined #gluster
01:45 johndescs joined #gluster
01:45 ws2k3 joined #gluster
01:45 haomaiwa_ joined #gluster
01:52 ackjewt joined #gluster
01:53 quique joined #gluster
01:54 bala1 joined #gluster
01:55 haomaiw__ joined #gluster
02:00 sputnik13 joined #gluster
02:05 justinmburrous joined #gluster
02:06 haomaiwa_ joined #gluster
02:15 haomai___ joined #gluster
02:24 justinmburrous joined #gluster
02:25 wgao joined #gluster
02:31 davemc joined #gluster
02:33 _Bryan_ joined #gluster
02:38 calisto joined #gluster
03:37 RameshN joined #gluster
03:43 justinmburrous joined #gluster
03:50 shubhendu joined #gluster
03:54 justinmb_ joined #gluster
03:56 _ndevos joined #gluster
03:56 itisravi joined #gluster
04:05 soumya joined #gluster
04:05 rejy joined #gluster
04:18 nbalachandran joined #gluster
04:25 rafi1 joined #gluster
04:26 Rafi_kc joined #gluster
04:27 kdhananjay joined #gluster
04:27 anoopcs joined #gluster
04:34 jiffin joined #gluster
04:39 atalur joined #gluster
04:41 rjoseph joined #gluster
04:47 ramteid joined #gluster
04:52 ppai joined #gluster
04:53 kanagaraj joined #gluster
04:54 ACiDGRiM joined #gluster
05:08 atinmu joined #gluster
05:14 justinmburrous joined #gluster
05:15 ndarshan joined #gluster
05:22 prasanth_ joined #gluster
05:24 lalatenduM joined #gluster
05:28 nishanth joined #gluster
05:30 spandit joined #gluster
05:38 justinmburrous joined #gluster
05:44 dusmant joined #gluster
05:47 overclk joined #gluster
05:51 Humble joined #gluster
05:52 glusterbot New news from newglusterbugs: [Bug 1141379] Geo-Replication - Fails to handle file renaming correctly between master and slave <https://bugzilla.redhat.com/show_bug.cgi?id=1141379>
05:54 justinmburrous joined #gluster
06:00 shubhendu_ joined #gluster
06:03 saurabh joined #gluster
06:07 rgustafs joined #gluster
06:13 karnan joined #gluster
06:19 shubhendu_ joined #gluster
06:20 dusmant joined #gluster
06:21 Slydder joined #gluster
06:22 ndarshan joined #gluster
06:22 Philambdo joined #gluster
06:23 pkoro_ joined #gluster
06:23 raghu joined #gluster
06:26 Slydder joined #gluster
06:26 Slydder morning all
06:34 Fen1 joined #gluster
06:35 RaSTar joined #gluster
06:42 kshlm joined #gluster
06:50 ctria joined #gluster
06:59 shubhendu_ joined #gluster
07:01 dusmant joined #gluster
07:01 ndarshan joined #gluster
07:02 atalur joined #gluster
07:09 atalur joined #gluster
07:11 kumar joined #gluster
07:12 SOLDIERz joined #gluster
07:13 SOLDIERz Hello everyone, I've got a question. At the moment I have a problem when I try to peer nodes, like so: gluster peer probe 127.0.0.1
07:14 atalur joined #gluster
07:14 SOLDIERz but the nodes don't see each other, so they don't form a cluster, and I don't know why
07:17 Slydder you can't use 127.0.0.1 when peering multiple nodes.
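(A minimal sketch of what Slydder means, assuming two nodes with placeholder hostnames gluster1/gluster2 that resolve to routable addresses on every node; probe by hostname or a real IP, never 127.0.0.1:)
  # run on gluster1:
  gluster peer probe gluster2
  # then verify from either node:
  gluster peer status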
07:19 rjoseph joined #gluster
07:19 SOLDIERz 127.0.0.1 was just an example
07:19 SOLDIERz well, I figured out the bug
07:19 atinmu joined #gluster
07:22 dusmant joined #gluster
07:22 glusterbot New news from newglusterbugs: [Bug 1152890] Peer probe during rebalance causing "Peer rejected" state for an existing node in trusted cluster <https://bugzilla.redhat.com/show_bug.cgi?id=1152890>
07:28 rgustafs joined #gluster
07:40 ricky-ticky joined #gluster
07:44 justinmburrous joined #gluster
07:47 justinmburrous joined #gluster
07:48 justinmburrous joined #gluster
07:49 davidhadas joined #gluster
07:52 glusterbot New news from newglusterbugs: [Bug 1152902] Rebalance on a dispersed volume produces multiple errors in logs <https://bugzilla.redhat.com/show_bug.cgi?id=1152902> || [Bug 1152903] Rebalance on a dispersed volume produces multiple errors in logs <https://bugzilla.redhat.com/show_bug.cgi?id=1152903>
07:55 sputnik13 joined #gluster
08:07 anands joined #gluster
08:09 Thilam joined #gluster
08:12 atinmu joined #gluster
08:14 davidhadas_ joined #gluster
08:16 dusmant joined #gluster
08:25 justinmburrous joined #gluster
08:26 Slashman joined #gluster
08:30 aravindavk joined #gluster
08:38 rjoseph joined #gluster
08:43 ricky-ticky1 joined #gluster
08:49 dusmant joined #gluster
08:50 nishanth joined #gluster
08:51 ndarshan joined #gluster
08:55 monotek joined #gluster
08:57 vikumar joined #gluster
09:05 itisravi joined #gluster
09:11 deepakcs joined #gluster
09:12 Fen1 joined #gluster
09:21 Slydder semiosis: you there?
09:26 justinmburrous joined #gluster
09:27 XpineX joined #gluster
09:35 dusmant joined #gluster
09:38 pkoro_ joined #gluster
09:41 gildub joined #gluster
09:42 spandit joined #gluster
09:43 lalatenduM joined #gluster
09:52 ndarshan joined #gluster
09:52 nishanth joined #gluster
09:53 RaSTar joined #gluster
09:53 glusterbot New news from newglusterbugs: [Bug 1152956] duplicate entries of files listed in the mount point after renames <https://bugzilla.redhat.com/show_bug.cgi?id=1152956> || [Bug 1152957] arequal-checksum mismatch between before and after successful heal on a replaced disk <https://bugzilla.redhat.com/show_bug.cgi?id=1152957>
09:55 dusmant joined #gluster
10:03 Slydder semiosis:  getting ready to start deadlock test of ganesha nfs server under heavy load. wish me luck.
10:10 lalatenduM_ joined #gluster
10:11 ricky-ticky joined #gluster
10:14 RaSTar joined #gluster
10:15 soumya Slydder, good luck :) ..Please keep us in the loop when you post the results. Our internal ganesha mailing list is "nfs-ganesha@redhat.com"
10:15 soumya thanks
10:20 dusmant joined #gluster
10:21 rjoseph joined #gluster
10:21 atinmu joined #gluster
10:27 justinmburrous joined #gluster
10:45 spandit joined #gluster
10:48 ricky-ticky1 joined #gluster
10:55 dblack joined #gluster
10:56 deepakcs joined #gluster
11:00 edward1 joined #gluster
11:00 gildub joined #gluster
11:00 ppai joined #gluster
11:01 ppai joined #gluster
11:03 LebedevRI joined #gluster
11:09 julim joined #gluster
11:17 nshaikh joined #gluster
11:23 ira joined #gluster
11:28 justinmburrous joined #gluster
11:35 rjoseph joined #gluster
11:36 davidhadas_ joined #gluster
11:36 overclk joined #gluster
11:37 atinmu joined #gluster
11:38 Philambdo joined #gluster
11:44 _ndevos joined #gluster
11:45 bene joined #gluster
11:56 tdasilva joined #gluster
12:01 virusuy joined #gluster
12:01 virusuy joined #gluster
12:01 dguettes joined #gluster
12:05 anands joined #gluster
12:07 kdhananjay joined #gluster
12:08 kanagaraj joined #gluster
12:13 rgustafs joined #gluster
12:17 B21956 joined #gluster
12:18 B21956 joined #gluster
12:28 justinmburrous joined #gluster
12:45 theron joined #gluster
12:47 ira joined #gluster
12:54 chirino joined #gluster
12:58 tdasilva joined #gluster
13:04 lalatenduM joined #gluster
13:06 rjoseph joined #gluster
13:06 RaSTar joined #gluster
13:17 mojibake joined #gluster
13:18 ppai joined #gluster
13:27 anands joined #gluster
13:30 nshaikh joined #gluster
13:36 sputnik13 joined #gluster
13:38 bennyturns joined #gluster
13:39 harish joined #gluster
13:42 Slydder semiosis: tests are looking good for ganesha. still no deadlocks.
13:46 kshlm joined #gluster
13:48 dguettes joined #gluster
14:07 Slydder I friggin LOVE ganesha. have never seen such speed in gluster.
14:13 Guest32622 joined #gluster
14:15 calisto joined #gluster
14:20 ron-slc joined #gluster
14:22 theron joined #gluster
14:23 jbautista- joined #gluster
14:24 firemanxbr joined #gluster
14:25 davidhadas_ joined #gluster
14:29 _Bryan_ joined #gluster
14:29 theron joined #gluster
14:30 justinmburrous joined #gluster
14:31 calisto1 joined #gluster
14:35 Slydder joined #gluster
14:36 theron joined #gluster
14:36 davemc joined #gluster
14:39 theron joined #gluster
14:41 Thilam hi, do you have an idea of when the 3.5.3 gluster packages for debian will be available?
14:43 msmith_ joined #gluster
14:45 MrNaviPacho joined #gluster
14:45 firemanxbr joined #gluster
14:52 Fen1 i don't understand, what exactly is NFS-Ganesha ? :/
14:54 plarsen joined #gluster
14:57 Slydder Fen1: user-space nfs server.
14:57 Slydder with glusterfs support.
14:57 Fen1 Slydder: and what is the difference from plain nfs ?
14:58 Slydder 1. I have yet to get a loopback deadlock. 2. no special privileges needed when running in a container.
15:00 Slydder 3. faster than fuse when overwriting existing files (at least for me).
15:03 Fen1 faster than fuse ? hum... ok
15:03 Fen1 how much ? (mb/s)
15:03 Slydder on my boxes it is. when I use fuse I get 5-15 second pauses on overwrites. very annoying.
15:05 ctria joined #gluster
15:09 edwardm61 joined #gluster
15:15 lalatenduM joined #gluster
15:17 scuttle|afk joined #gluster
15:23 daMaestro joined #gluster
15:26 Ramereth joined #gluster
15:35 semiosis Slydder: thats great to hear
15:35 semiosis Thilam: today
15:38 theron joined #gluster
15:39 theron joined #gluster
15:41 sputnik13 joined #gluster
15:42 Thilam semiosis : thx
15:45 rwheeler joined #gluster
15:45 doo joined #gluster
15:49 justinmburrous joined #gluster
16:06 jobewan joined #gluster
16:09 jbrooks joined #gluster
16:17 rjoseph joined #gluster
16:31 kkeithley gluster's nfs is user-space nfs too. nfs-ganesha has nfsv4, nfsv4.1, and will eventually have pNFS.
16:32 kkeithley gluster's nfs is nfsv3 only.
16:32 semiosis kkeithley++
16:32 glusterbot semiosis: kkeithley's karma is now 19
16:33 glusterbot kkeithley++
16:33 semiosis so glusterbot doesn't acknowledge its own karma
16:35 kkeithley huh?
16:36 semiosis i had glusterbot give you karma to see if it would react to its own message
16:36 semiosis it did not
16:38 RameshN joined #gluster
16:38 kkeithley ah
16:43 Alpinist joined #gluster
16:46 semiosis @later tell Slydder soumya asked in #gluster-dev if you could email the gluster-devel mailing list about your experience with ganesha.
16:46 glusterbot semiosis: The operation succeeded.
16:49 skippy is Ganesha that much faster than native Gluster?
16:51 dtrainor joined #gluster
16:52 john_locke joined #gluster
16:52 dtrainor joined #gluster
16:52 skippy are there docs online for hooking up Gluster with Ganesha?
16:53 ndevos there is a blog post about it...
16:53 john_locke hello guys I have centos 6.5 and the gluster version is glusterfs-3.4.0.57rhs-1.el6_5.x86_64 is that OK? Any problems with that? I will use glusterfs in production servers
16:53 ndevos skippy: http://blog.gluster.org/2014/09/glusterfs-and-nfs-ganesha-integration/
16:53 glusterbot Title: GlusterFS and NFS-Ganesha integration | Gluster Community Website (at blog.gluster.org)
16:54 soumya semiosis++
16:54 glusterbot soumya: semiosis's karma is now 2000005
16:54 skippy thanks ndevos !
16:54 ndevos soumya++
16:54 glusterbot ndevos: soumya's karma is now 1
16:54 semiosis @learn ganesha as http://blog.gluster.org/2014/09/glusterfs-and-nfs-ganesha-integration/ and also https://forge.gluster.org/nfs-ganesha-and-glusterfs-integration/pages/Home
16:54 glusterbot semiosis: The operation succeeded.
16:54 semiosis @ganesha
16:54 glusterbot semiosis: http://blog.gluster.org/2014/09/glusterfs-and-nfs-ganesha-integration/ and also https://forge.gluster.org/nfs-ganesha-and-glusterfs-integration/pages/Home
16:55 JoeJulian skippy: Also, it's not that it's "faster", it's that the kernel caches file information and doesn't look it up again. That can speed up some operations, but at some cost to consistency and throughput.
16:56 skippy consistency is rather important.  I'd rather have that over speed, I think.
16:59 john_locke This looks pretty nice!
17:02 john_locke Anyone here have experience using glusterfs with windows clients? I would like to hire someone with experience to implement a tuned glusterfs setup that runs with good performance; the hardware is good, all supermicro servers.
17:02 john_locke all clients are windows clients
17:05 theron joined #gluster
17:13 JoeJulian Not volunteering but I do want to point out that "good" is not a contractible term.
17:19 kkeithley john_locke: centos 6.5 with glusterfs-3.4.0.57rhs-1.el6_5.x86_64 is RHS, client side only. You'll want to get the latest bits (3.4.5 or 3.5.2) from download.gluster.org or from the CentOS Storage SIG if you're going to deploy servers in production
17:21 nage joined #gluster
17:22 JoeJulian In other words, that version is the client for Red Hat Storage, not for the upstream version. Although that client might be compatible, it's recommended to use the client that matches the servers you're using.
17:29 davemc joined #gluster
17:29 zerick joined #gluster
17:29 charta joined #gluster
17:39 nshaikh joined #gluster
17:42 Fen1 joined #gluster
17:42 julim joined #gluster
17:43 ricky-ti1 joined #gluster
17:55 ekuric joined #gluster
17:55 SOLDIERz_ joined #gluster
17:56 systemonkey joined #gluster
18:09 RameshN joined #gluster
18:18 lpabon joined #gluster
18:29 john_locke thank you kkeithley: I will download and install the one you gave me. JoeJulian: so is it better to change to a newer one, and the clients too, or just use the one for centos?
18:29 systemonkey joined #gluster
18:32 JoeJulian Are you a Red Hat Storage customer?
18:33 JoeJulian john_locke: ^
18:33 john_locke JoeJulian, No, I have centos installed on 12 servers
18:34 john_locke but not a red hat customer; I don't know if they will pay for that, the manager wants to pay someone on Elance
18:34 JoeJulian Then use the bits from download.gluster.org. Use yum priorities to ensure that the gluster yum repo has a higher priority than base.
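(A rough sketch of the yum priorities setup JoeJulian describes, assuming CentOS 6; the exact .repo file name depends on how the download.gluster.org repo file was saved locally:)
  # install the priorities plugin
  yum install yum-plugin-priorities
  # then, in the gluster repo file under /etc/yum.repos.d/, give the repo a
  # lower number (= higher priority) than base/updates, which default to 99
  # once the plugin is enabled:
  #   [glusterfs-epel]
  #   ...
  #   priority=50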
18:38 charta joined #gluster
19:11 ilbot3 joined #gluster
19:11 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/ | GlusterFS 3.6 test week - https://public.pad.fsfe.org/p/GlusterFS-3.6-test-doc
19:17 john_locke JoeJulian, Thank you Sir. Do you know the average price red hat charges to install a glusterfs system? on 3 servers perhaps....?
19:19 john_locke Need some realistic numbers to ask my boss to contract directly...
19:19 JoeJulian No clue. I just learn everything I can and do it myself.
19:19 JoeJulian @commercial
19:19 glusterbot JoeJulian: Commercial support of GlusterFS is done as Red Hat Storage, part of Red Hat Enterprise Linux Server: see https://www.redhat.com/wapps/store/catalog.html for pricing also see http://www.redhat.com/products/storage/ .
19:19 JoeJulian Let me know if that info is no longer accurate.
19:21 sijis writing about 5GB of data to a volume is taking ~6hrs. is that normal? there are about 15-20k files.
19:25 sijis \
19:32 semiosis sijis: define normal
19:32 semiosis everyone's setup is different
19:34 sijis semiosis: i'm using base installation, no special settings. 2 node cluster, replication.
19:34 sijis (not geo-replication)
19:36 JoeJulian It's probably normal for some hardware and/or networks.
19:36 john_locke JoeJulian, Thanks!!!
19:38 sijis is there a general suggestion on what could improve it?
19:38 JoeJulian Faster processors, networks, lower latency connections... probably in reverse order...
19:39 firemanxbr joined #gluster
19:40 sijis vms in the same network. latency is ~0.500ms. same vlan. 2GB ram, 1 cpu
19:42 john_locke I have a 4U supermicro server with 3 areca cards. I know how to format the arrays in zfs, but in gluster I don't know how to take this raid5 array and make it into a brick, so I can have 3 bricks from this server...
19:42 JoeJulian vms? Are you writing to a mounted volume, or are you writing to a vm's filesystem stored in a qcow2 file on a gluster mount?
19:43 sijis vms in vmware using local filesystem. nothing mounted or shared.
19:43 john_locke in some tutorials you point to /dev/sd1, but here, or at least in zfs, you point to the PCI:000:000 stuff and it works,
19:43 john_locke Not yet, just trying to format with XFS
19:44 john_locke later in the steps comes the mounting and the volume stuff
19:44 sijis JoeJulian: yes. virtual machines
19:45 JoeJulian john_locke: a brick is a mounted, formatted filesystem.
19:47 john_locke oh! OK, I see in the glusterfs tutorial: /dev/sdb1 /data/brick1 xfs defaults 1 2, and then mount -a && mount. Instead of /dev/sdb1, if I want to point to a raid card, how is that done?
19:48 john_locke each card shows up in by-path as something like this: pci-0000:05:00.0-scsi-0:0:0:0
19:49 john_locke in zfs I create the pool pointing at each card with those numbers, one for each, and they make one big volume pool. I tried to do that with gluster and it doesn't work that way
19:50 xavih joined #gluster
19:57 JoeJulian sijis: Well there's some extra overhead. I wouldn't use qcow2 if I was worried about performance. I would also use libgfapi (assuming qemu). If you really want better file performance, just mount a gluster volume within your VM and use that directly for file storage.
19:59 JoeJulian john_locke: Right. If you're using zfs, make your zfs filesystem, mount it somewhere (eg /mnt/zfs1) and use a subdirectory of that mountpoint for your glusterfs brick (eg /mnt/zfs1/brick)
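(A minimal sketch of that layout, using JoeJulian's example mountpoint; the pool/dataset name "tank/gv0", the volume name, and server1 are hypothetical placeholders:)
  # create a zfs dataset and mount it somewhere, e.g. /mnt/zfs1
  zfs create -o mountpoint=/mnt/zfs1 tank/gv0
  mkdir -p /mnt/zfs1/brick
  # the brick is the subdirectory, not the dataset mountpoint itself
  gluster volume create myvol server1:/mnt/zfs1/brick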
20:00 john_locke JoeJulian, and that would be it? I have samba pointing there; once gluster is mounted, no problems?
20:01 john_locke I mean, it uses gluster now instead of zfs? I am looking to share this over nfs instead of samba.
20:02 JoeJulian Sure. Once you have a gluster volume, mount that volume as nfs.
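(For reference, gluster's built-in NFS server speaks NFSv3 only, as kkeithley noted above, so a client mount looks roughly like this; server1 and myvol are placeholders:)
  mount -t nfs -o vers=3,proto=tcp server1:/myvol /mnt/myvol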
20:03 sijis JoeJulian: not using qemu or qcow. (all vms). gluster1, gluster2 have a volume named app1. then on vm-app-01 server, we have the volume mounted as /mnt/app1. that's the setup you are referring to, no?
20:04 JoeJulian possibly. It's hard to say since you keep changing your mind. :P
20:04 JoeJulian bbiab
20:05 sijis JoeJulian: all these vms are on Vmware ESXi 5.5 on a vmfs datastore
20:05 sijis JoeJulian: i was just saying that's the setup i have
20:09 rwheeler joined #gluster
20:11 dtrainor joined #gluster
20:12 dtrainor_ joined #gluster
20:13 giannello joined #gluster
20:13 SOLDIERz_ joined #gluster
20:13 dtrainor_ joined #gluster
20:16 charta joined #gluster
20:18 frayz joined #gluster
20:39 theron joined #gluster
20:41 theron joined #gluster
20:43 semiosis JoeJulian: i think john_locke was just drawing a comparison to zfs, not saying he wanted to use zfs with glusterfs
20:44 semiosis john_locke: a common setup is either using raw disks formatted with XFS, or using LVM over the disks and formatting a LV with XFS.  then mounting the XFS filesystem somewhere like /bricks/myvol0 and using that as a glusterfs brick
20:45 semiosis comparisons to RAID (or ZFS) do more harm than good for all the confusion it introduces
20:45 semiosis imho
20:45 semiosis glusterfs should be understood on its own terms, not in relation to RAID or whatever
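(A sketch of the LVM-over-raw-disks layout semiosis suggests above; device names, VG/LV names, and the mountpoint are placeholders:)
  pvcreate /dev/sdb /dev/sdc
  vgcreate gluster_vg /dev/sdb /dev/sdc
  lvcreate -n brick0 -l 100%FREE gluster_vg
  mkfs.xfs /dev/gluster_vg/brick0
  mkdir -p /bricks/myvol0
  mount /dev/gluster_vg/brick0 /bricks/myvol0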
21:06 theron joined #gluster
21:10 davemc joined #gluster
21:15 SOLDIERz_ joined #gluster
21:22 john_locke semiosis, I just came back, thanks. And it's not necessary to use zfs; the problem I have is that with zfs I do this: zpool create -o ashift=12 -f -m /ScannerOutputs ScannerOutputs pci-0000:05:00.0-scsi-0:0:0:0 pci-0000:06:00.0-scsi-0:0:0:0 pci-0000:08:00.0-scsi-0:0:0:0
21:23 john_locke and zfs grabs those raid5 addresses and makes one big pool; I don't know how to point at or use these for glusterfs
21:24 john_locke I point to /dev/disk/by-path and there are 3 areca cards there, and that pci-0000: is the "hook" to make my pool
21:24 semiosis glusterfs doesn't care what the underlying storage is, as long as it's a regular (POSIX) filesystem
21:25 semiosis you just tell glusterfs which directory to use, on which server
21:25 semiosis as i said before, most common is XFS
21:26 semiosis so for example, mkfs.xfs /dev/sdx1; mount /dev/sdx1 /bricks/myvol0; gluster volume create myvol server1:/bricks/myvol0
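(A hedged continuation of that example: the volume also has to be started before clients can mount it; server1 and myvol are placeholders:)
  gluster volume start myvol
  # native FUSE mount from a client:
  mount -t glusterfs server1:/myvol /mnt/myvol
  # or over NFSv3:
  mount -t nfs -o vers=3 server1:/myvol /mnt/myvol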
21:26 john_locke but I tried gluster volume create ScannerOutputs, OK going to format to xfs
21:27 semiosis the key is the mount
21:27 semiosis you need to give glusterfs a directory, which is normally a mounted filesystem
21:28 john_locke but I tried gluster volume create ScannerOutputs. OK, going to format as xfs. Also I tried mkfs.xfs as in the gluster tutorial but it gives me command not found; going to check what to install, I have centos 6.5
21:28 semiosis why zfs?
21:28 semiosis why are you using zfs at all?
21:29 john_locke that's the one a guy recommended as reliable, so I set up 3 servers with iscsi and samba
21:29 semiosis forgive my bluntness, but how do you know so much about zfs on centos yet are unclear on how to mount?
21:29 semiosis well that guy left you hanging
21:30 john_locke Don't worry, you are fine, I'm learning to make pools, and zfs was easy to deploy.
21:30 semiosis interesting
21:30 john_locke semiosis, and yes, the performance is bad
21:31 semiosis the package for mkfs.xfs may be xfsprogs
21:31 semiosis not sure what it is on centos
21:31 semiosis xfsprogs on debians
21:31 semiosis fwiw, i would just use lvm over raw disks, and format the LV with xfs
21:32 john_locke So I'm going to try to make this little pool with 3 servers, almost a petabyte, and try using it with nfs; without samba each server gives 800 mb/s write
21:32 semiosis wow, ok, lots of storage
21:32 semiosis maybe raw disks & LVM isn't appropriate for that much
21:33 john_locke Yes they are a little big, some of them have 2tb harddrives, but others have 3tb and 6tb, so the owner of the company would rather buy bigger HDDs than more servers. What do you suggest?
21:34 JoeJulian pfft... lots of storage... ;)
21:34 semiosis JoeJulian: how many PB you got these days?
21:34 john_locke semiosis, I would like to make a big pool; right now I'm moving 150tb to another pool, so I can build this new glusterfs.
21:36 JoeJulian I think we're up to 5.7PB now.
21:36 semiosis !!!
21:37 john_locke mine would be only like 1.5 or something like that.
21:37 semiosis JoeJulian: largest single glusterfs volume?
21:38 JoeJulian shrinking quickly...
21:38 JoeJulian shhh....
21:38 semiosis hah ok
21:38 john_locke JoeJulian, how do you recommend formatting? I was thinking of making every raid5 areca card 1 brick, 3 bricks per server
21:38 semiosis maybe davemc would like to do a case study on your setup for gluster.org :)
21:40 john_locke I contacted him, but he was busy doing something... I told him I have all the data needed and how it's connected (we have 1 switch at 10gb, and all servers have 2 nics, each at 10gb)
21:41 semiosis my last comment was for JoeJulian
21:41 john_locke ohh! sorry.
21:41 JoeJulian hehe
21:43 john_locke semiosis, So for a block of 60tb, what do you recommend formatting with?
21:45 semiosis xfs, i guess
21:45 JoeJulian fat32.
21:45 semiosis xfs is the most commonly used on disk filesystem for glusterfs
21:45 john_locke OK xfs it is...
21:45 john_locke semiosis, thanks!
21:46 semiosis yw, good luck!
21:46 JoeJulian Don't be a stranger.
21:53 john_locke JoeJulian, the block size: mkfs.xfs -i size=512, or do you think higher is better? the brick would be 60TB
21:53 semiosis default should be fine.
21:53 semiosis without any -i option
21:55 john_locke semiosis, OK thanks, omg! it worked, gonna continue with the other 2 cards, thanks!
21:56 semiosis yw
21:56 JoeJulian inode size has only to do with the size of all the extended attributes and whether you want them spilling into other inodes. Speed tests haven't shown any significant difference. With 60TB, however, you /may/ want to mount with inode64.
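(A sketch of that inode64 mount as an /etc/fstab entry, modeled on the tutorial line quoted earlier; the device path and mountpoint are placeholders:)
  /dev/sdb1  /bricks/brick1  xfs  defaults,inode64  1 2
  # or by hand:
  mount -o inode64 /dev/sdb1 /bricks/brick1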
21:58 john_locke thanks JoeJulian !
22:02 theron joined #gluster
22:10 Pupeno joined #gluster
22:12 Pupeno_ joined #gluster
22:12 theron joined #gluster
22:14 charta joined #gluster
22:20 Jamoflaw joined #gluster
22:32 badone joined #gluster
22:37 doo joined #gluster
22:49 edong23 joined #gluster
22:54 theron_ joined #gluster
22:55 Intensity joined #gluster
23:04 SOLDIERz_ joined #gluster
23:23 Pupeno joined #gluster
23:24 gildub joined #gluster
23:25 doo joined #gluster
23:29 msmith_ joined #gluster
23:51 Telsin john_locke: if you're using zfs, "zfs create v0/bricks", then create subdirs in that and use those for the "gluster vol create" bricks. or force it right on that dataset if you want
23:51 Telsin but with raid 5 in hardware, I'd probably use xfs directly on the raids and not mess with zfs
23:51 Telsin it has… drawbacks… if you don't need its special features
23:52 Telsin (replace with your zpool name/zfs primary volume for v0/bricks, of course)
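(A rough sketch of Telsin's "xfs directly on the raids" suggestion, reusing one of the by-path device names john_locke mentioned; the device path and mountpoint are illustrative only:)
  # each Areca RAID5 array appears as a single block device; format it directly
  mkfs.xfs /dev/disk/by-path/pci-0000:05:00.0-scsi-0:0:0:0
  mkdir -p /bricks/brick1
  mount -o inode64 /dev/disk/by-path/pci-0000:05:00.0-scsi-0:0:0:0 /bricks/brick1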
23:53 john_locke Telsin, thank you very much. I have completed the first 3 bricks, one on each raid5 card, and made a volume. I have not tested anything yet; I had some problems when I tried to mount, but resolved them: I did not know that I had to "start" the volume. I did, then mounted, and it seems to work. Going to have some questions though.
23:54 john_locke I used the xfs
23:55 john_locke each brick is 60tb; if you have any suggestions let me know, I can do it all over again, maybe a 6 HDD array instead of the 12 disks right now... More bricks, more speed, right?
23:56 Telsin yeah, you always have to get used to something's semantics the first time around
23:56 Telsin depends on how you're going to use it & how fast any given brick is; hard to say
