
IRC log for #gluster, 2015-09-19


All times shown according to UTC.

Time Nick Message
00:12 JoeJulian snapshot is used on bd volumes, which use lvm for the bricks.
00:13 JoeJulian So they're basically lvm snapshots, which have been very reliable and, more recently, actually usable. :)
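For context, a minimal sketch of driving those snapshots from the gluster CLI (3.6 and later), assuming a volume named myvol whose bricks sit on thinly provisioned LVM; the volume and snapshot names are placeholders:

    gluster snapshot create mysnap myvol
    gluster snapshot list
    gluster snapshot info mysnap
    # restoring requires the volume to be stopped first
    gluster volume stop myvol
    gluster snapshot restore mysnap
    gluster volume start myvol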
00:25 shyam joined #gluster
00:40 dlambrig joined #gluster
00:53 Pupeno joined #gluster
01:18 epoch joined #gluster
01:27 zhangjn joined #gluster
01:45 zhangjn joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:49 zhangjn joined #gluster
01:55 vimal joined #gluster
02:17 tkiel joined #gluster
02:31 dlambrig joined #gluster
02:48 dgandhi joined #gluster
03:13 RobertLaptop joined #gluster
03:24 catern left #gluster
03:26 Lee1092 joined #gluster
03:42 RobertLaptop joined #gluster
03:46 nishanth joined #gluster
03:49 Pupeno joined #gluster
03:52 calavera joined #gluster
03:55 srsc joined #gluster
03:57 srsc setting up a new cluster...any thoughts on using hardware (Dell H700) vs software (mdadm) RAID on the bricks?
03:58 TheSeven joined #gluster
04:08 hchiramm_home joined #gluster
04:21 RedW joined #gluster
04:24 calavera joined #gluster
04:49 gem joined #gluster
04:50 haomaiwa_ joined #gluster
05:01 haomaiwa_ joined #gluster
05:07 skoduri joined #gluster
05:13 calavera joined #gluster
05:14 haomaiwa_ joined #gluster
05:21 yosafbridge joined #gluster
05:27 hchiramm_home joined #gluster
05:31 LebedevRI joined #gluster
05:39 DavidVargese joined #gluster
05:43 calavera joined #gluster
05:45 nangthang joined #gluster
05:45 srsc also...i'm looking to transfer data from an older cluster (3.4.1) to a newer cluster (3.7.4), but i can't seem to mount the older volume from the new cluster using the newer client. i'd rather not have to transfer everything over nfs. any ideas?
06:04 hgowtham joined #gluster
06:04 mash333 joined #gluster
06:06 srsc and one more. if i delete a volume with gluster volume delete <VOL>, i really have to go through each peer and run the setfattr -x commands and delete .glusterfs to reset them and allow for creation of a new volume? maybe that's by design, but that's really cumbersome. maybe there could be an option to reset bricks to allow for easy deletion/creation of new volumes.
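For reference, the per-brick reset srsc is describing usually looks like this, run on every peer for every brick of the deleted volume; /data/brick1 is a placeholder path:

    # clear the volume markers so the brick can be reused in a new volume
    setfattr -x trusted.glusterfs.volume-id /data/brick1
    setfattr -x trusted.gfid /data/brick1
    rm -rf /data/brick1/.glusterfs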
06:13 srsc re: transferring between different versions of gluster, anyone tried running another version of gluster-client inside a docker container?
06:39 nangthang joined #gluster
06:45 thangnn_ joined #gluster
06:53 skoduri joined #gluster
07:02 vimal joined #gluster
07:24 haomaiwa_ joined #gluster
07:29 RameshN joined #gluster
07:33 nangthang joined #gluster
07:45 baojg joined #gluster
07:52 amitc__ joined #gluster
07:52 maveric_amitc_ joined #gluster
07:52 free_amitc_ joined #gluster
08:23 poornimag joined #gluster
08:23 RameshN joined #gluster
08:39 alghost joined #gluster
08:46 Pupeno joined #gluster
09:00 social joined #gluster
09:02 hgowtham joined #gluster
09:08 R0ok__ joined #gluster
09:15 alghost_ joined #gluster
09:17 alghost_ joined #gluster
09:22 Nebraskka JoeJulian, thanks for direction =) would be interesting to study this
09:23 Nebraskka woah, quite informative help output, cool
09:44 onorua joined #gluster
10:20 overclk joined #gluster
10:24 vimal joined #gluster
10:28 mhulsman joined #gluster
10:29 overclk joined #gluster
10:37 mhulsman joined #gluster
10:58 DV joined #gluster
11:23 onorua joined #gluster
12:23 abyss joined #gluster
12:26 mhulsman joined #gluster
12:27 natarej_ joined #gluster
12:44 haomaiwa_ joined #gluster
12:53 mhulsman joined #gluster
13:01 haomaiwa_ joined #gluster
13:14 ira joined #gluster
13:19 baojg joined #gluster
13:26 chirino joined #gluster
13:33 baojg joined #gluster
14:00 hchiramm_home joined #gluster
14:15 gem joined #gluster
14:16 _maserati joined #gluster
14:18 _maserati_ joined #gluster
14:29 mhulsman joined #gluster
14:29 alghost_ joined #gluster
14:47 lkoranda_ joined #gluster
15:18 _maserati_ joined #gluster
15:31 chirino joined #gluster
15:34 hgichon0 joined #gluster
15:35 hgichon0 ping
15:35 glusterbot hgichon0: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
15:36 hgichon0 i am testing gluster-nfs... just now i found strange nfs inode usage...
15:36 hgichon0 10gnode2-1:/thin        9105330880 2074631 9103256249    1% /mnt/n1
15:36 hgichon0 10gnode2-2:/thin        9105330880 2074631 9103256249    1% /mnt/n2
15:36 hgichon0 10gnode2-3:/thin        9105330880 2074631 9103256249    1% /mnt/n3
15:36 hgichon0 10gnode2-4:/thin        9105330880 2074631 9103256249    1% /mnt/n4
15:36 hgichon0 10gnode2-5:/thin        9105330880 2074631 9103256249    1% /mnt/n5
15:36 hgichon0 10gnode2-6:/thin        9105330880 2074631 9103256249    1% /mnt/n6
15:36 hgichon0 10gnode2-8:/thin        9
15:37 hgichon0 8node = 4 X 2
15:37 hgichon0 client is centos7
15:38 hgichon0 10gnode2-7:/thin        46377304064 295399424 46081904640   1% /mnt/n7
15:38 hgichon0 10gnode2-8:/thin        45518307328 290889728 45227417600   1% /mnt/n8
15:40 hgichon0 only the 10gnode2-7 nfs mount dir (/mnt/n7) does not show the same inode usage
15:42 hgichon0 With a fuse mount, the inode usage (290889728) is also the same as /mnt/n8
15:43 hgichon0 Hum, sorry for my poor english.
16:16 hgichon0 joined #gluster
17:03 chirino joined #gluster
17:09 natarej joined #gluster
17:59 ir8 joined #gluster
18:00 ir8 to create a three node (replicated) environment I would only need three servers to mirror each other, correct?
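A minimal sketch of that setup, assuming three peers named server1..server3, each with a brick at /data/brick1 (hostnames and paths are placeholders):

    # run from server1 once glusterd is up on all three nodes
    gluster peer probe server2
    gluster peer probe server3
    gluster volume create myvol replica 3 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1
    gluster volume start myvol

With replica 3 and exactly three bricks, every file exists on all three servers.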
18:17 dlambrig joined #gluster
18:26 mhulsman joined #gluster
18:37 dlambrig joined #gluster
19:31 Schatzi joined #gluster
19:31 Schatzi hi @all
20:15 ir8 Schatzi: Have time for a few questions?
20:49 shyam joined #gluster
20:58 srsc FWIW, i was able to successfully run an older version of glusterfs-client inside a docker container, passing the mounted newer gluster volume into the container so i could copy from the old volume to the new one while each was mounted with a different glusterfs-client version
20:58 cuqa_ joined #gluster
20:59 srsc docker run --privileged -v /dev/infiniband:/dev/infiniband -v /mnt/newgluster:/mnt/newgluster -i -t debian /bin/bash
20:59 srsc and then install the older client version inside that container, mount the old volume, and rsync from the mounted old volume to /mnt/newgluster
21:00 srsc obviously the /dev/infiniband bit can be ignored if using vanilla tcp
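Putting srsc's steps together as a rough sketch; the hostname oldserver and volume name oldvol are placeholders, and it assumes the distro's packaged glusterfs-client matches the 3.4 cluster:

    # on the host: the new 3.7 volume is already mounted at /mnt/newgluster
    docker run --privileged -v /dev/infiniband:/dev/infiniband \
        -v /mnt/newgluster:/mnt/newgluster -i -t debian /bin/bash

    # inside the container: install a client that matches the old cluster, mount, copy
    apt-get update && apt-get install -y glusterfs-client rsync
    mkdir -p /mnt/oldgluster
    mount -t glusterfs oldserver:/oldvol /mnt/oldgluster
    rsync -a /mnt/oldgluster/ /mnt/newgluster/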
21:09 Pupeno joined #gluster
21:11 DV joined #gluster
21:17 shyam joined #gluster
21:17 Pupeno joined #gluster
21:34 Mr_Psmith joined #gluster
21:49 Mr_Psmith I am having a hard time understanding the difference between “Striped” and “Distributed Striped”, after reading the docs @ http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/
21:49 glusterbot Title: Setting Up Volumes - Gluster Docs (at gluster.readthedocs.org)
21:50 Mr_Psmith In fact, the accompanying illustrations even look as if they could be backward based on the descriptions :?
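The distinction shows up in the create syntax: a plain striped volume has exactly as many bricks as its stripe count, while a distributed striped volume has a multiple of the stripe count, so whole files are distributed across several stripe sets. Roughly (server and export names are placeholders):

    # striped: 4 bricks, stripe 4 -> a single stripe set across all four bricks
    gluster volume create test-stripe stripe 4 transport tcp \
        server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4

    # distributed striped: 8 bricks, stripe 4 -> two stripe sets, files distributed between them
    gluster volume create test-dist-stripe stripe 4 transport tcp \
        server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 \
        server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8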
22:01 dgbaley joined #gluster
22:21 DV joined #gluster
23:26 Pupeno joined #gluster
23:31 Akee joined #gluster
23:37 Mr_Psmith Does anyone here use hardware RAID? I initially got the impression from reading that no RAID should be used and each brick should be a disk, but the Red Hat Storage guidelines recommend hardware RAID, and I get the idea that implies 1 brick/node
