
IRC log for #gluster, 2012-10-25


All times shown according to UTC.

Time Nick Message
00:48 aliguori joined #gluster
01:02 blendedbychris joined #gluster
01:02 blendedbychris joined #gluster
01:56 lh joined #gluster
01:56 lh joined #gluster
02:02 sunus joined #gluster
02:02 sunus left #gluster
02:03 sunus joined #gluster
02:19 mohankumar joined #gluster
02:26 ika2810 joined #gluster
02:52 lng joined #gluster
03:00 lng Hi! My volumes are marked "N" under "Online" for each process when I execute `gluster volume status`. Does it mean they are offline?  http://pastie.org/private/2crawg16xxf3bnnpw53eg
03:00 glusterbot Title: Private Paste - Pastie (at pastie.org)
03:00 lng I can see data is coming...
03:05 lng Can anybody help please?
03:06 lng how to get bricks online?
03:11 bharata joined #gluster
03:20 bala joined #gluster
03:31 berend` joined #gluster
03:34 shylesh joined #gluster
03:38 lng JoeJulian: Hello! Are you here?
03:41 berend`` joined #gluster
03:48 berend``` joined #gluster
03:57 sunus i create a volume, it says name or a prefix of it is already part of a volume? why?
03:57 glusterbot sunus: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
04:12 sripathi joined #gluster
04:33 sunus i did: volume create dist_vol 192.168.10.2:/gfs/dist_brk_vm1 192.168.10.3:/gfs/dist_brk_vm2 192.168.10.4:/gfs/dist_brk_vm3 , which went success. but after that , i did volume create strp_vol stripe3 192.168.10.2:/gfs/strp_brk_vm1 192.168.10.3:/gfs/strp_brk_vm2 192.168.10.4:/gfs/strp_brk_vm3   failed, says brick may be containing or be contained by an existing brick, why?
04:35 sunus i thought /gfs/dist_brk_vm* are bricks and /gfs/strp_brk_vm* are another bricks, why says it containing/contained ?
04:37 JZ_ joined #gluster
04:41 vpshastry joined #gluster
04:42 hagarth joined #gluster
04:45 lng irc.gnu.org#gluster Gluster IRC Channel
04:45 lng oops
04:47 lng joined #gluster
04:48 lng joined #gluster
04:50 Humble joined #gluster
04:50 lng joined #gluster
04:52 lng joined #gluster
04:55 deepakcs joined #gluster
04:58 hyt joined #gluster
04:58 mohankumar joined #gluster
05:02 sripathi1 joined #gluster
05:02 sripathi1 joined #gluster
05:06 JoeJulian lng: N means that glusterfsd for that brick isn't running. Do "gluster volume start $volname force" on that server to make sure it's started. Check your log files to find out why it's not running.
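A minimal sketch of the checks JoeJulian is describing, assuming the volume is named storage and the default log location:

    gluster volume status storage            # bricks showing N in the Online column are down
    gluster volume start storage force       # respawns any brick daemon (glusterfsd) that is not running
    less /var/log/glusterfs/bricks/*.log     # the brick log says why glusterfsd exited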
05:07 avati_ joined #gluster
05:10 lng JoeJulian: oh ok
05:11 lng thanks for the hint!!!
05:11 JoeJulian You're welcome. Goodnight. :D
05:11 lng oh :)
05:11 lng it is afternoon here
05:12 JoeJulian GMT-7 here, so it's 10pm for me.
05:12 lng 10pm is so early :-)
05:12 blendedbychris joined #gluster
05:14 blendedbychris joined #gluster
05:14 lng JoeJulian: after `gluster volume start storage force` it looks the same
05:15 lng JoeJulian: what should I do with that?
05:15 lng maybe I can reattach it somehow?
05:15 lng it shows the same on 4 nodes: Starting volume storage has been successful
05:17 JoeJulian Gah, I hate when logs are paraphrased. I can't find that error in the source.
05:19 lng [2012-10-25 03:18:48.820051] E [xlator.c:385:xlator_init] 0-storage-server: Initialization of volume 'storage-server' failed, review your volfile again
05:19 lng [2012-10-25 03:18:48.820051] E [xlator.c:385:xlator_init] 0-storage-server: Initialization of volume 'storage-server' failed, review your volfile again
05:20 lng [2012-10-25 03:18:48.820065] E [graph.c:294:glusterfs_graph_init] 0-storage-server: initializing translator failed
05:20 lng [2012-10-25 03:18:48.820079] E [graph.c:483:glusterfs_graph_activate] 0-graph: init failed
05:20 JoeJulian fpaste
05:20 lng sure
05:21 lng http://paste.ubuntu.com/1304208/
05:21 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
05:21 lng grep E
05:23 JoeJulian port is already in use suggests that glusterfsd is already running.
05:23 sgowda joined #gluster
05:23 JoeJulian ps ax | grep glusterfsd
05:24 vpshastry left #gluster
05:24 lng http://paste.ubuntu.com/1304212/
05:24 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
05:24 JoeJulian This is weird. That cleaning error you posted, someone else posted once in 3.2.5. 3.2.5 doesn't have that error message anywhere in it.
05:24 bulde1 joined #gluster
05:25 lng JoeJulian: what would you suggest me to do to fix it asap?
05:25 lng add new nodes and copy data to it?
05:25 lng or somehow reattach the bricks?
05:26 JoeJulian killall glusterfsd ; killall -9 glusterfsd ; killall glusterd ; glusterd
05:26 vpshastry joined #gluster
05:26 JoeJulian in theory
05:26 lng and then?
05:27 lng how about  /etc/init.d/glusterd ?
05:27 JoeJulian Then the bricks should be restarted. Gimme another ps  to look at.
05:27 lng which one?
05:27 JoeJulian "/etc/init.d/glusterd start" would work in place of that last glusterd if you'd rather.
05:28 lng ok
05:28 lng what should I post?
05:28 JoeJulian ^R ps ax
05:28 lng I don't understand why cpu and network graphs are the same for all the nodes... and volumes size change also
05:29 hagarth joined #gluster
05:29 lng `ps ax` - without grep?
05:29 JoeJulian Hard to say. You've kinda been following your own advice thus far so you're really the only one who can guess what the state of your services and volumes is.
05:30 JoeJulian ps ax | grep glusterfsd
05:30 lng http://paste.ubuntu.com/1304217/
05:30 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
05:30 lng the whole thing
05:30 JoeJulian that'll work
05:30 JoeJulian Well, apparently your sed didn't take: volfile-id storage.10.132.65.32.storage-4b
05:30 lng omg - apache2????
05:32 lng I can see /var/lib/glusterd/vols/storage/run/10.132.65.32-storage-2b.pid
05:32 lng `grep -r 10.132.65.32 /var/lib/glusterd/` - nothing is returned
05:32 JoeJulian And that 10.132... isn't a hostname and it should be if your change took effect.
05:32 JoeJulian :/
05:32 lng JoeJulian: strange
05:33 sunus JoeJulian: hi, are you there?
05:33 JoeJulian barely
05:33 lng JoeJulian: I cannot see any IP in /var/lib/glusterd/
05:33 sunus JoeJulian: i did: volume create dist_vol 192.168.10.2:/gfs/dist_brk_vm1 192.168.10.3:/gfs/dist_brk_vm2 192.168.10.4:/gfs/dist_brk_vm3 , which went success. but after that , i did volume create strp_vol stripe3 192.168.10.2:/gfs/strp_brk_vm1 192.168.10.3:/gfs/strp_brk_vm2 192.168.10.4:/gfs/strp_brk_vm3   failed, says brick may be containing or be contained by an existing brick, why?
05:34 lng JoeJulian: I'm trying your kill command
05:35 lng JoeJulian: No more IPs!
05:35 JoeJulian lng: When you ask what to do and then don't respond with the results, you leave the person helping you unsure whether you've performed the command. Please, if you ask for advice and make me go to all the trouble of typing it out, at least try it.
05:36 raghu joined #gluster
05:36 lng JoeJulian: of course I do!!!
05:36 lng and thank you so much for helping me!
05:37 JoeJulian sunus: Ah, I think I see it. I read those lines 5 times before I noticed. "stripe3" instead of "stripe 3"
05:37 JoeJulian lng: You're welcome.
05:37 JoeJulian Does status show properly now?
05:38 lng it looks better now!
05:38 JoeJulian excellent
05:39 lng JoeJulian!!!
05:39 lng THANKS!!!!! YOU FIXED THAT!
05:39 lng Now I have all Y
05:39 deepakcs joined #gluster
05:39 sunus JoeJulian: sorry, my bad. but i did use "stripe 3" not stripe3
05:40 lng does it mean `/etc/init.d/glusterd restart` is not enough?
05:40 anti_user joined #gluster
05:40 anti_user hello
05:40 JoeJulian lng: No, that just restarts glusterd
05:40 glusterbot anti_user: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
05:40 JoeJulian @processes
05:40 glusterbot JoeJulian: the GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/hJBvL for more information.
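A quick way to match those names to what is actually running on a server (a trivial sketch, not part of the factoid):

    ps ax | grep gluster
    # glusterd    -> management daemon, one per server
    # glusterfsd  -> one per brick, its --volfile-id names the volume and brick
    # glusterfs   -> FUSE mounts, the NFS server, and glustershd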
05:41 anti_user im using gluster for 3 days, and want to know a few things))
05:41 lng I see
05:42 JoeJulian anti_user: It's okay to date a nun as long as you don't get into the habit.
05:42 lng JoeJulian: but why CPU was the same for all?
05:42 JoeJulian lng: Because it was probably working.
05:43 lng :-)
05:43 anti_user i have 4 servers with gluster; if one server is rebooted or shut down for 10 minutes while a user copies data onto this array, will this file be placed on that machine after the reboot?
05:43 bulde1 joined #gluster
05:43 JoeJulian sunus: And dist_vol actually works I presume.
05:43 sunus JoeJulian: volume info says Brick1: 192.168.10.2:/gfs/dist_brk_vm1, Brick2: 192.168.10.2:/gfs/dist_brk_vm2...    why the is containing/contained relationship involved when i trying to volume create strip_vol stripe 2 192.168.10.2:/gfs/strp_brk_vm1, 192.168.10.2:/gfs/strp_brk_vm2
05:43 lng JoeJulian: is there a way to create new brick with some data already on it?
05:44 sunus JoeJulian: yeah, i think it works
05:44 hateya joined #gluster
05:45 JoeJulian anti_user: If I'm understanding your question correctly, yes. Any changes made to the bricks that are still operating will replicate to a brick that's not once that brick returns. This is an automated process with 3.3, or you have to do a ,,(repair) process with earlier versions.
05:45 glusterbot anti_user: http://www.gluster.org/community/documentation/index.php/Gluster_3.1:_Triggering_Self-Heal_on_Replicate
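For pre-3.3 versions, the ,,(repair) step glusterbot links to amounts to walking the mount so every file gets looked up and healed; roughly (mount point assumed):

    find /mnt/volume -noleaf -print0 | xargs --null stat >/dev/null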
05:46 anti_user p.s. im using git version of gluster
05:46 JoeJulian ~pasteinfo | sunus
05:46 glusterbot sunus: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
05:46 JoeJulian anti_user: Then it'll be automatic (and you've lost your marbles ;) )
05:48 JoeJulian lng: yes. In a distribute volume, there's very few caveats. In a replicated volume, the left-hand replica may be pre-loaded.
05:48 anti_user oh thanks! will be nice to play with it
05:48 lng JoeJulian: how to achieve it?
05:48 anti_user and last question)))
05:48 JoeJulian anti_user: That is, of course, if you're using replicated volumes. :)
05:49 lng how about .glusterfs/ directory?
05:49 JoeJulian lng: When you're creating a replicated volume, you list bricks in pairs. The left-hand brick of that pair may be preloaded.
05:50 lng that's great
05:50 JoeJulian The .glusterfs directory will be created as part of the self-heal process.
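A sketch of the pairing JoeJulian means, with hypothetical names; the first brick of each replica pair (server1's here) is the one that may already hold data, and .glusterfs appears on the others as self-heal copies it over:

    gluster volume create myvol replica 2 \
        server1:/bricks/a server2:/bricks/a \
        server1:/bricks/b server2:/bricks/b
    gluster volume start myvol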
05:50 JoeJulian anti_user: I hope that's not true. I would feel an overwhelming sense of responsibility if I were to be asked your very last question ever.
05:50 anti_user im working at an ISP and we are building a big shared disk; we have many users in /etc/passwd with user quotas. how can i use user quotas with gluster?
05:51 morse joined #gluster
05:51 lng so I can cretae snapshot of existing EBS volume that has data, then create new EBS volume from this snapshot to use in Gluster!
05:51 lng JoeJulian: nice!
05:52 JoeJulian anti_user: http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf  Chapter 9
05:52 lng JoeJulian: would you recomend to rebalance my storage after all?
05:53 * JoeJulian shrugs
05:53 anti_user oh thanks for this guide!
05:53 sunus JoeJulian: okay, wait a sec!!
05:53 JoeJulian it already exists doesn't it.
05:57 Humble joined #gluster
05:57 lng JoeJulian: how to initiate self-heal?
05:57 JoeJulian wait around
05:57 JoeJulian or gluster volume heal $volume
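The 3.3 heal commands in full, assuming the volume name storage from earlier:

    gluster volume heal storage                    # heal only the files marked as needing it
    gluster volume heal storage full               # crawl and heal everything
    gluster volume heal storage info               # entries still pending
    gluster volume heal storage info split-brain   # entries that need manual attention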
05:58 guigui3 joined #gluster
05:58 lng JoeJulian: thank you
05:58 sunus JoeJulian: http://fpaste.org/3OBY/   here u go
05:58 glusterbot Title: Viewing dear JoeJulian help plz:) by sunus (at fpaste.org)
05:59 JoeJulian Ah, that's a different story. not available
06:00 JoeJulian sunus: on .3 make sure /gfs is mounted and rm -rf /gfs/strp_brk_vm2
06:01 JoeJulian sunus: I assume you're creating a stripe volume just for testing so you don't need to read ,,(stripe) but there it is anyway, just in case.
06:01 glusterbot sunus: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
06:02 sripathi joined #gluster
06:04 sunus JoeJulian: thanks, i will give it a try now
06:04 JoeJulian Alright guys, good luck. I'm going to go remind my wife that she's married now. :)
06:05 sunus JoeJulian: -.- sorry i don't have /gfs/strp_brk_vm2
06:07 lng JoeJulian: `gluster volume heal storage info` shows the same files all the time. Is it ok?
06:07 sunus lng: it looks like he's gone for his wife:)
06:08 lng ah, ok :-)
06:12 sunus lng: hi you know what's going here? http://fpaste.org/3OBY/
06:12 glusterbot Title: Viewing dear JoeJulian help plz:) by sunus (at fpaste.org)
06:12 madphoenix joined #gluster
06:12 sunus lng: i really don't get why there's containing/contained..
06:16 overclk joined #gluster
06:21 lng ---------T 1 root root 0 Oct 23 10:51 /storage/150000/152000/152600/152627/game.dat
06:21 lng what does T mean?
06:21 badone joined #gluster
06:26 hyt hi, I am wondering is there any way I can disable gluster core dump, on centos 5.5, I tried ulimit, /etc/security/limits.conf, but with no luck.
06:31 lng I think it's sticky bit...
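Zero-byte entries with mode ---------T like that are usually DHT link files, pointers the distribute translator leaves behind after renames or rebalances; one way to confirm, run against the copy on a brick (path assumed), is:

    getfattr -m . -d -e hex /brick/storage/150000/152000/152600/152627/game.dat
    # a trusted.glusterfs.dht.linkto value names the brick holding the real file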
06:33 sunus joined #gluster
06:34 hagarth joined #gluster
06:37 lng ls: cannot access /storage/2130000/2133000/2133600/2133652/game.dat: Input/output error
06:37 lng ??????????  ? ?        ?            ?            ? game.dat
06:37 lng what does it mean?
06:39 bulde1 EIO? do you see split-brain logs?
06:39 sunus brick may be containing or be contained by an existing brick, why?
06:40 ngoswami joined #gluster
06:41 lkoranda joined #gluster
06:42 lng bulde1: how to check split-brain logs?
06:43 puebele joined #gluster
06:45 lkoranda joined #gluster
06:53 bulde1 lng: check the mount logs (/var/log/glusterfs/mount-point-name.log
06:53 lng ok
06:53 lng how to enable Quorum Enforcement?
06:53 ika2810 joined #gluster
06:54 sripathi joined #gluster
06:55 lng bulde1: http://pastie.org/private/bnlaley9bnapsiqoe1f1q
06:55 glusterbot Title: Private Paste - Pastie (at pastie.org)
06:56 lng bulde1: I have some message in /var/log/glusterfs/glustershd.log
06:56 deepakcs joined #gluster
06:57 lng bulde1: is there any solution to this problem?
06:58 lng /var/log/glusterfs/glustershd.log.1:[2012-10-25 06:49:12.966618] W [afr-self-heal-data.c:831:afr_lookup_select_read_child_by_txn_type] 0-storage-replicate-1: <gfid:bac3284e-e395-44b2-92ab-19880cd1a13d>: Possible split-brain
06:58 bulde1 lng: you using nfs client?
06:59 lng bulde1: no, native gluster one
07:00 bulde1 then check the log file in client machine
07:00 bulde1 not server
07:00 lng I see
07:01 lng moment
07:02 puebele joined #gluster
07:02 Tarok joined #gluster
07:03 lng bulde1: http://pastie.org/private/eiosaidtn6ticagabfdn7q
07:03 glusterbot Title: Private Paste - Pastie (at pastie.org)
07:03 lng from client
07:04 ctria joined #gluster
07:07 lng bulde1: any recommendation?
07:08 bulde1 lng: not master of those logs... can you mail in usermailing list? or open a bug report
07:09 lng sure
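For reference only, not something bulde1 prescribed here: the usual manual recovery on 3.3 when a file really is split-brain is to discard one copy on its brick, including the matching .glusterfs gfid hard link, and let self-heal restore it from the good brick. A rough sketch with paths assumed and the gfid taken from the log line above:

    getfattr -m . -d -e hex /brick/path/to/file       # compare trusted.afr.* on both bricks first
    rm /brick/path/to/file                            # on the brick whose copy you are discarding
    rm /brick/.glusterfs/ba/c3/bac3284e-e395-44b2-92ab-19880cd1a13d
    gluster volume heal storage full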
07:10 anti_user hmm, when i try to enable the volume quota, # gluster volume quota datastore enable fails with "Quota command failed"
07:12 sripathi joined #gluster
07:14 tjikkun_work joined #gluster
07:16 puebele2 joined #gluster
07:18 bulde1 anti_user: can you check glusterd logs?
07:19 anti_user where i can see log? on master machine?
07:19 anti_user or peers
07:19 bulde1 where the command was entered (one of the pool)
07:20 bulde1 /var/log/glusterfs/*glusterd.vol*
07:20 anti_user ok, one minute to copy/paste
07:20 dobber joined #gluster
07:23 anti_user etc-glusterfs-glusterd.vol.log its right log?
07:24 ramkrsna joined #gluster
07:25 Tarok joined #gluster
07:26 andreask joined #gluster
07:31 kd joined #gluster
07:32 bulde1 anti_user: yes
07:33 anti_user there is nothing about quota
07:33 bulde1 k, can you check 'cli.log'?
07:35 anti_user http://pastebin.com/raw.php?i=uhm2X5g6
07:35 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
07:35 anti_user @paste
07:35 glusterbot anti_user: For RPM based distros you can yum install fpaste, for debian and ubuntu it's dpaste. Then you can easily pipe command output to [fd] paste and it'll give you an url.
07:36 anti_user http://dpaste.org/fZzdt/
07:36 glusterbot Title: dpaste.de: Snippet #211760 (at dpaste.org)
07:38 anti_user hmmm, i tried 5 times to activate quotas and on the 6th time the command succeeded: # gluster volume quota datastore enable now says "Quota is already enabled"
07:40 anti_user and i dont change configuration, what happend?)))
07:42 anti_user okay, another small problem from git version - i cannot user K, M, G, T to set quotas
07:43 Nr18 joined #gluster
07:43 anti_user # gluster volume quota datastore limit-usage /datastore/user2 100M
07:43 anti_user and receive Please enter a correct value
07:44 anti_user if i use 100 (not 100M) it gets OK
07:50 anti_user haa, it understands MB,GB,TB not M,G,T))))
07:58 anti_user okay, another problem - its not displaying the used quota size, and i can write files over the quota limit
07:59 anti_user output http://dpaste.org/sphLC/
07:59 glusterbot Title: dpaste.de: Snippet #211763 (at dpaste.org)
08:05 guigui4 joined #gluster
08:06 Azrael808 joined #gluster
08:07 TheHaven joined #gluster
08:10 kshlm anti_user: are you writing into a directory called /datastore/user2 on the mount point (ie. <mnt-point>/datastore/user2) ?
08:10 anti_user yes!
08:10 anti_user one second i send you info about volume
08:12 anti_user Brick1: gluster-node-01:/tesing
08:12 anti_user Brick2: gluster-node-02:/testing
08:12 anti_user Brick3: gluster-node-03:/testing
08:12 anti_user Brick4: gluster-node-04:/testing
08:12 anti_user Options Reconfigured:
08:12 anti_user diagnostics.count-fop-hits: on
08:12 anti_user diagnostics.latency-measurement: on
08:12 anti_user features.limit-usage: /datastore/user2:10MB
08:12 anti_user features.quota: on
08:13 anti_user mount gluster-node-01:datastore on /datastore type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
08:14 kshlm so you are writing into /datastore/datastore/user2? then if the quota is not being enforced, it's a bug.
08:16 foster joined #gluster
08:16 anti_user i writing to /datastore/user2
08:20 kshlm anti_user: the paths given for quota treat the mount point as the root. this means once a volume is mounted, the quota is applied on the directory <mount-point>/<quota-path>
08:20 kshlm in your case it is /datastore/datastore/user2.
08:20 anti_user okay, thanks! i trying it
08:21 kshlm cool. :)
08:22 anti_user its work! now i understand))))
08:22 anti_user sorry for my mistake
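Put as commands, with /datastore being the mount point as in the log: quota paths are relative to the volume root, so a limit on /user2 is what governs writes to /datastore/user2.

    gluster volume quota datastore limit-usage /user2 10GB
    mkdir -p /datastore/user2            # <mount-point>/<quota-path>
    gluster volume quota datastore list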
08:44 Triade joined #gluster
08:49 64MAB45FH joined #gluster
08:54 duerF joined #gluster
08:58 sunus can someone please explain to me what's the relationship between volume and brick(subvolume) ? is brick and subvolume same thing?
08:58 clag_ hi all, big problem here with glusterfs 3.2. 3 bricks replicate. Many small files/cache from web application. After a maintenance shutdown of a brick there are many LOOKUPs now and load average is too high. Can we stop self heal ?
08:58 sunus i'll be waiting or you can just pm me:)
08:59 clag_ i mean ;: cluster.entry-self-heal, cluster.data-self-heal, cluster.metadata-self-heal
08:59 clag_ don't know exactly what params each do
09:02 pkoro joined #gluster
09:08 kd joined #gluster
09:08 tjikkun_work_ joined #gluster
09:28 bulde1 joined #gluster
09:56 hagarth joined #gluster
10:04 sripathi joined #gluster
10:07 NuxRo guys, when I mount a glusterfs volume via NFS from a windows 7 I get "permission denied" for any write operation. Any gotchas I need to be aware of?
10:09 lng NuxRo: yes, don't use windows
10:09 NuxRo lng: touche
10:10 NuxRo but I do need some help nevertheless, so any other ideas? :)
10:13 anti_user where i can get web console gluter administration?
10:17 anti_user *gluster
10:18 manik joined #gluster
10:33 ika2810 left #gluster
10:42 TheHaven joined #gluster
10:43 sripathi joined #gluster
10:47 guigui1 joined #gluster
11:07 sripathi1 joined #gluster
11:14 hagarth joined #gluster
11:21 puebele joined #gluster
11:22 Daxxial_ joined #gluster
11:25 kkeithley1 joined #gluster
11:29 johnmark anti_user: ovirt.org
11:30 anti_user much thanks! will try it
11:34 johnmark anti_user: sure
11:39 tryggvil joined #gluster
11:41 tryggvil joined #gluster
11:49 sunus hi, can anyone explain a little bit about volume and and brick to me? is subvolume and brick the same thing?
11:49 sripathi joined #gluster
11:54 TheHaven joined #gluster
11:58 hagarth1 joined #gluster
12:04 bulde1 sunus: what is subvol? 'brick' is the target storage pool (a posix xlator definition), and volume is pool of bricks
12:06 _Bryan_ sunus: Bricks are local physical directories or Partitions on a server.  A Volume is a collection of multiple bricks.
12:07 sunus bulde1: i saw some old documents, there's subvol in .vol files and it has name brick_xxx
12:09 sunus _Bryan_:  is there some limitation or what? i
12:10 sunus bulde1: _Bryan_: i got this http://fpaste.org/3OBY/ and still could not figure out why
12:10 glusterbot Title: Fedora Pastebin - by Fedora Unity (at fpaste.org)
12:11 bulde2 joined #gluster
12:11 sunus when i delete the volume i created above, then i can create strp volumes.. i am really confused
12:11 _Bryan_ I think you have extended attributes on the one brick...I would guess you had it in another volume and then deleted it..and then created a new one
12:13 sunus _Bryan_:  yeah, then? first i clean the exsiting files then create new one
12:13 _Bryan_ did you delete and recreate the directory that you defined as the brick...or only the files in the directory you defined as a brick
12:14 sunus _Bryan_:  i just rm -rf the files
12:14 _Bryan_ remove the directory and then recreate it...
12:14 _Bryan_ that will get rid of the extra attributes..
12:14 sunus ok, i will do that now
12:14 sunus _Bryan_:  could you please wait a sec, i need to do that in  my qemu's
12:15 _Bryan_ if that is a partition you may have to unmount it first...
12:15 _Bryan_ yeah I will be here for about 10 more mins..then I have to head into the office
12:15 sunus _Bryan_:  umount ? why ?
12:16 _Bryan_ if you have mounted to the directory...on the local server you won't be able to remove the dir while it is mounted
12:16 sunus _Bryan_:  you mean umount the partition glusterfs is gonna use?
12:16 _Bryan_ if you are working with a directory within the mount then you are good and you just need to remove and recreate the dir
12:17 bulde1 joined #gluster
12:17 sunus _Bryan_:  i'll just mkdir a new dir, is that ok?
12:17 kkeithley1 sunus: you don't need to unmount it -- you can just clear all the xattrs.
12:17 sunus it's that ok if i just mkdir a new dir?
12:17 _Bryan_ kkeithley1: was trying to keep it simple....
12:17 kkeithley_wfh yes, you can do that too
12:17 _Bryan_ 8-)
12:17 kkeithley_wfh _Bryan_: I know
12:18 kkeithley_wfh I too like to use a subdir on a full disk brick so that I can just rm -rf the dir
12:19 _Bryan_ JoeJulian switched me over to doing that..I used to mount the local raid array and then set that mount point as the brick....now I go one sub-dir lower
12:19 kkeithley_wfh because I'm always creating and tearing down volumes
12:19 _Bryan_ yeah....my test setup is just scary at times... 8-)
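Clearing the stale attributes kkeithley_wfh mentions, instead of recreating the directory, usually looks like this (brick path from the log, run on each affected server; a sketch, not a command given in the channel):

    setfattr -x trusted.glusterfs.volume-id /gfs/strp_brk_vm2
    setfattr -x trusted.gfid /gfs/strp_brk_vm2
    rm -rf /gfs/strp_brk_vm2/.glusterfs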
12:20 sunus ok now i delete all the old stuffs
12:20 _Bryan_ ok
12:21 sunus i will create a dist vol with 3 bricks in 3 qemus
12:22 sunus volume create dist_vol 192.168.10.2:/gfdata/dist_brk1 192.168.10.3:/gfdata/dist_brk2 192.168.10.4:/gfdata/dist_brk3   is that ok?
12:23 sunus _Bryan_:  hum?
12:23 _Bryan_ I think you need transport tcp in there
12:23 sunus _Bryan_:  it's necessary?
12:24 _Bryan_ gluster volume create dist_vol transport tcp 192.168.10.2:/gfdata/dist_brk1 192.168.10.3:/gfdata/dist_brk2 192.168.10.4:/gfdata/dist_brk3
12:24 _Bryan_ I always do it...
12:24 _Bryan_ I dont use any rdma....so I always specify
12:24 sunus _Bryan_:  ok, i think it's default to tcp, according to the doc
12:25 _Bryan_ possibly....I have been with gluster for a few versions....so old habits die hard
12:25 _Bryan_ please try ....I will have to bail in a couple mins...
12:25 sunus _Bryan_: ok  that wen succeed
12:25 _Bryan_ 8-)  Glad to hear...
12:26 _Bryan_ I think your problem is that while you are testing and playing with the configuration you had old brick information on the directories....
12:26 _Bryan_ this is there to prevent you from automatically overwriting the data if you try to add a brick to more than one volume
12:26 _Bryan_ you just need to start the volume and mount it now...
12:27 _Bryan_ good luck....I am out of here...but there are may that can help you if you continue to have any other issues...
12:27 sunus _Bryan_:  ok. thank you! i will just try to create the stripe volume, if that doesn't succeed, i will look into the config file
12:28 sunus _Bryan_:  where is the configuration placed? i compiled glusterfs from src
12:31 balunasj joined #gluster
12:55 ramkrsna joined #gluster
12:55 ramkrsna joined #gluster
13:00 guigui1 joined #gluster
13:03 sunus hi , i post a question at: http://community.gluster.org/q/can-not-create-new-volume-after-created-one-volume/   anyone would be interested to help me out? thank you!!
13:03 glusterbot Title: Question: can not create new volume after created one volume. (at community.gluster.org)
13:06 manik joined #gluster
13:29 aliguori joined #gluster
13:31 Humble joined #gluster
13:32 spn joined #gluster
13:33 Nr18 joined #gluster
13:37 clag_ hello, once two bricks are detached from a cluster, can they be added to a new gluster volume with existing data ?
13:37 clag_ or volume creation need empty directory ?
13:43 Staples84 joined #gluster
13:43 rwheeler joined #gluster
13:45 guillaume__ joined #gluster
13:51 Nr18 joined #gluster
13:55 hagarth joined #gluster
13:57 tripoux joined #gluster
14:00 guillaume__ joined #gluster
14:02 stopbit joined #gluster
14:02 TheHaven joined #gluster
14:07 cbehm joined #gluster
14:13 sripathi joined #gluster
14:20 manik joined #gluster
14:21 cbehm We're running gluster 3.3 (not 3.3.1 at this time) and we've noticed an unusual behavior related to bandwidth. After about 6 hours, the bandwidth usage jumps up by 2-3x. Remounting from the client side (FUSE) puts the bandwidth usage back to the starting levels. Has anyone else seen that behavior?
14:34 aliguori joined #gluster
14:42 dobber cbehm: i've noticed that if i remount my traffic is low for about 6 hours, then the bandwidth rises again
14:42 cbehm dobber: at least I'm not imagining it :)
14:43 dobber and it's pretty high too
14:43 dobber 3-4 times higher
14:44 dobber http://cl.ly/image/0h2N1g2o2801
14:44 glusterbot Title: Screen Shot 2012-10-25 at 5.44.15 PM.png (at cl.ly)
14:46 cbehm dobber: yup, looks like we see the same kind of bandwidth utilization change
14:47 Technicool joined #gluster
14:48 wushudoin joined #gluster
14:48 dobber http://cl.ly/image/052V1U1u3l3O
14:49 glusterbot Title: Screen Shot 2012-10-25 at 5.48.49 PM.png (at cl.ly)
14:49 dobber here is the second server
14:51 kd left #gluster
15:04 Daxxial_1 joined #gluster
15:11 daMaestro joined #gluster
15:21 puebele joined #gluster
15:23 tryggvil_ joined #gluster
15:29 plarsen joined #gluster
15:32 semiosis :O
15:34 semiosis JoeJulian: portreserve -- good tip, that's a new one foe me
15:34 semiosis s/foe/for/
15:34 glusterbot What semiosis meant to say was: JoeJulian: portreserve -- good tip, that's a new one for me
15:40 blubberdi joined #gluster
15:41 puebele1 joined #gluster
15:41 blubberdi Hi, can someone please tell me which option I can use with ubuntu 12.04 to mount glusterfs after network is up. With `_netdev` I get: "unknown option _netdev (ignored)"
15:42 semiosis blubberdi: are you trying to mount from localhost at boot time?
15:42 blubberdi semiosis: yes
15:43 blubberdi "localhost:/foo /var/foo glusterfs defaults 0 0"
15:43 semiosis and what version of glusterfs is this?  where did you get the package?
15:43 blubberdi I use "3.3.0-ppa1~precise3" from your ppa
15:47 semiosis blubberdi: are you just starting out with glusterfs?
15:49 blubberdi semiosis: Yes, I've read the administration guide, setup glusterfs removed a peer/brick, added one. Now I've tried to reboot but it stops after it couldn't mount the glusterfs.
15:49 semiosis i suggest you start over with ,,(ppa)
15:49 glusterbot The official glusterfs 3.3 packages for Ubuntu are available here: https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.3
15:50 semiosis that's got the newest release 3.3.1, it has the fix for the mount from localhost at boot, and it follows the standard packaging structure as used in debian/ubuntu repos
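A commonly used fstab line for that setup on Ubuntu 12.04, shown as an assumption rather than something semiosis dictated here; nobootwait keeps mountall from blocking boot if the volume isn't reachable yet:

    localhost:/foo  /var/foo  glusterfs  defaults,nobootwait  0  0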
15:52 blubberdi I have some data in the glusterfs. Can I upgrade or do I have create it new?
16:06 chouchins joined #gluster
16:09 blubberdi I just start new. semiosis thank you very much for your help
16:09 semiosis d'oh i just got off a call
16:11 tryggvil joined #gluster
16:11 semiosis @later tell blubberdi sorry i had to take a phone call.  i hope you were successful getting the mount working.
16:11 glusterbot semiosis: The operation succeeded.
16:11 UnixDev is it safe to use gluster with btrfs as the underlying filesystem?
16:11 semiosis UnixDev: i'd like to know the answer to that as well :)
16:12 UnixDev lol
16:12 * semiosis just switched from ext4 to xfs
16:12 UnixDev id like to know how I ended up split brain too… I think it -could- be that nfs was accessed before nodes had a chance to reconnect, but it would be millisecond timing only
16:13 UnixDev semiosis: why? any problems i should know about? I'm running ext4 now
16:13 semiosis UnixDev: which distro?
16:13 UnixDev centos 6
16:13 semiosis yeah you should know about ,,(ext4)
16:13 glusterbot Read about the ext4 problem at http://joejulian.name/blog/glusterfs-bit-by-ext4-structure-change/
16:13 semiosis :(
16:13 mrobbert joined #gluster
16:14 cbehm i'm looking forward to btrfs coming out of experimental, before that i'd say "gluster on btrfs is only as safe as btrfs" - so personally i wouldn't be using it for production
16:14 semiosis generally xfs is the recommended filesystem for glusterfs, and use -i size=512 when formatting
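The formatting semiosis recommends, with a device name assumed:

    mkfs.xfs -i size=512 /dev/sdb1       # 512-byte inodes leave room for gluster's xattrs
    mount /dev/sdb1 /bricks/brick1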
16:14 jdarcy There's a new ext4 problem BTW.  http://linux.slashdot.org/story/12/10/24/1848203/ext4-data-corruption-bug-hits-linux-kernel
16:14 glusterbot Title: EXT4 Data Corruption Bug Hits Linux Kernel - Slashdot (at linux.slashdot.org)
16:16 wushudoin joined #gluster
16:16 jdarcy Basically, upstream ext4 isn't something I consider reliable.  I use XFS, or at least wait until Eric Sandeen (local-FS lead at Red Hat) and crew have done their usual excellent job.
16:17 cbehm jdarcy: that's an awfully nasty ext4 bug, ouch
16:19 mrobbert Can anybody here explain why having a large stack size limit (ulimit -s) causes the glusterfs client to fail to start due to memory allocation?
16:19 jdarcy How large are we talking about?
16:20 mrobbert 2097152
16:20 sunus joined #gluster
16:20 mrobbert I think that is 2GB.
16:20 UnixDev jdarcy: what do you think about btrfs?
16:24 guigui3 joined #gluster
16:25 jdarcy UnixDev: Um, try not to?  ;)
16:25 UnixDev lol… how about zfs on linux? looking for something with dedupe
16:26 jdarcy UnixDev: But seriously, it embodies some cool ideas.  I think the world needs something like it, even though the COW approach has lost some of its lustre.
16:26 * jdarcy is more of a log-based kinda guy, which is almost the opposite of COW.
16:27 jdarcy UnixDev: If dedup is a requirement then I'd say btrfs is a better long-term bet than ZFS on Linux, because it doesn't have the hatred of the senior kernel hackers to deal with.
16:27 semiosis UnixDev: zfs on linux has had some issues in the past with xattrs not supported on certain types of files (symlinks, devices, etc) iirc -- but those may have been resolved by now
16:28 jdarcy Whether it's a better bet *today* is a bit harder question, which I don't feel qualified to answer.
16:28 semiosis people have used it, and glusterfs did work to some degree, tho idk if they got as far as running it in production
16:28 Mo__ joined #gluster
16:30 jdarcy I think for a lot of cases you'd be better off with something like lessfs (on the brick side in our case).  Yes, it's FUSE etc. etc. but if you're concerned with space rather than speed that shouldn't be an issue.  In some quick tests I did, it held up surprisingly well.
16:32 cbehm since there seems to be a little more activity on the channel now, does anyone know what causes a sudden 2-3x increase in bandwidth usage by gluster after about 6 hours?
16:32 cbehm if we remount the client side (FUSE) then the bandwidth usage returns to the previous levels
16:33 UnixDev con about btrfs is it seems to still be developing and evolving, at least with zfs on linux there is a spec to follow and the underlying fs design has been proven to scale and work in solaris…
16:33 UnixDev jdarcy: you have any test results vs xfs or anything else?
16:33 jdarcy cbehm: Nothing comes to mind right away.  Are you sure there's not something external (e.g. indexing daemons) generating the load?
16:33 jdarcy UnixDev: I'm sure somebody does, but not me.
16:34 jdarcy cbehm: The other possibility is that it's self-heal activity, which should be visible in the log.
16:35 cbehm jdarcy: it's not self-heal fortunately, the behavior is that it runs at 3-5Mbps for ~6 hours then jumps (and stays) at 2-3x that
16:36 cbehm obviously dependent on how much work is being sent to it, but it's very apparent and very abrupt
16:36 cbehm i don't think it's an external item, but i'll recheck to make sure that it's not that
16:38 jdarcy cbehm: I don't know of anything within GlusterFS, other than self-heal, that would cause that.
16:40 cbehm ok thanks - maybe it's a fuse thing
16:40 UnixDev jdarcy: lessfs seems interesting… don't know if its anything near production though
16:57 jdarcy UnixDev: I don't either, TBH, but that's what testing is for.
16:57 Triade joined #gluster
17:04 Bullardo joined #gluster
17:12 tryggvil joined #gluster
17:19 raghu joined #gluster
17:22 UnixDev I have a volume with replicate 2 that has to bricks. If I remove a brick, format it, then re-add. Will the data be copied back to it automatically?
17:22 UnixDev two bricks**
17:29 hagarth joined #gluster
17:39 jdarcy UnixDev: In 3.3+ it should happen automatically.  Prior to that you'd have to do a find/ls on the volume to make it happen.
17:39 UnixDev ok, I'm running 3.3 from git.. is the master branch the stable one?
17:40 jdarcy Master branch is probably the *least* stable.  That's where commits go first, before they're pulled onto release branches (if they're pulled at all).
17:42 jiqiren joined #gluster
17:55 UnixDev jdarcy: what branch do you recommend?
17:56 semiosis UnixDev: why don't you use packages?
17:58 jdarcy UnixDev: For production, I'd suggest the latest gluster.org packages that aren't beta/qa.  For test/POC I'd say beta are OK.  Wouldn't get more adventurous than that except for actual hands-on code dev.
17:59 jdarcy Personally I do all of my work on git master, but then I'm a core developer.
18:00 jdarcy My motto is "do as I don't"
18:01 Bullardo joined #gluster
18:01 Fabiom when/how does self-heal initiate on 3.3 ?
18:01 jdarcy Fabiom: There's a separate daemon, configured as a client but runs on the servers, to do it.  Various events will trigger it, such as when a server changes state.
18:04 jdarcy Self-heal is the most complicated code in GlusterFS, by the way.  That means its rate of change is highest, so even I often find myself giving slightly outdated answers.
18:04 ctria joined #gluster
18:06 Fabiom jdarcy: Ok. (Replication 2 servers 2 bricks) I took down server2-brick2. Did some file writes. Brought Server2 back up. gluster volume heal [volumename] info shows entries under Server1 that need to be healed. But Server2 shows no entries needing healing. Is this standard behaviour
18:06 tryggvil_ joined #gluster
18:07 jdarcy Fabiom: Sounds like it so far.  The awareness of pending self-heals is always on the survivor(s), not the node(s) that went down.
18:07 jdarcy After all, they're the ones that saw the writes.  The dead nodes didn't.  ;)
18:08 mrobbert left #gluster
18:08 Fabiom jdarcy: That makes sense. Thanks! I need more coffee.
18:10 raghu joined #gluster
18:20 UnixDev semiosis: I had a problem with vmware before, packages were not up to date. Needed to compile from source
18:32 hchiramm_ joined #gluster
18:46 glusterbot New news from resolvedglusterbugs: [Bug 839768] firefox-10.0.4-1.el5_8-x86_64 hang when rendering pages on glusterfs client <https://bugzilla.redhat.com/show_bug.cgi?id=839768>
18:59 UnixDev joined #gluster
19:09 raghu joined #gluster
19:14 elyograg just reading about that new ext4 bug.  my plans for my production gluster involve either fedora 17 or 18a ... looks like fedora is already on the 3.6.2 kernel if it's being kept up to date.  i won't be using ext4 for my bricks, but so far it has been my plan to use it for the OS partitions.
19:14 elyograg s/fedora is/fedora 17 is/
19:14 glusterbot What elyograg meant to say was: just reading about that new ext4 bug.  my plans for my production gluster involve either fedora 17 or 18a ... looks like fedora 17 is already on the 3.6.2 kernel if it's being kept up to date.  i won't be using ext4 for my bricks, but so far it has been my plan to use it for the OS partitions.
19:18 elyograg touch /forcefsck before every reboot probably would avoid it, but that would lead to painful boot times.  in production all servers will have remote access cards so I'll be able to reach the console and fix any problems that come up.. but very ugly.
19:24 Bullardo joined #gluster
19:35 Fabiom joined #gluster
19:38 agwells07142 joined #gluster
19:39 agwells07142 What is the correct way of hooking up Nova-Volume (openstack) with Glusterfs? The OSConnect page goes into the directory I mount it to, but how do I tell Nova-Volume not to try and mount a LVM volume group?
19:39 agwells07142 is there a specific driver I should be using?
20:18 y4m4 joined #gluster
21:21 hattenator joined #gluster
21:33 UnixDev joined #gluster
21:46 Triade joined #gluster
21:48 Triade1 joined #gluster
22:17 JoeJulian agwells07142: I really don't know. I installed with puppet and am using glusterfs-swift packages on centos 6.3 and it just worked. I do need to figure out why, but I can't right now.
22:19 dmachi joined #gluster
22:22 dmachi if I have an application which makes a call to setxattr() on a file residing on a -t glusterfs mount, and then immediately after the set succeeds do list/getxattr() on that same file, does the updated attribute get distributed synchronously or does that happen during a heal (leaving the possibility that the get returns a previous value)?
22:49 rkubany joined #gluster
22:59 bfoster_ joined #gluster
23:00 jdarcy_ joined #gluster
23:00 kkeithley1 joined #gluster
23:04 bfoster joined #gluster
23:09 tryggvil joined #gluster
23:35 bfoster_ joined #gluster
23:35 jdarcy joined #gluster
23:36 kkeithley joined #gluster
23:37 badone joined #gluster
23:47 glusterbot New news from newglusterbugs: [Bug 870256] Samba "store dos attributes" feature doesn't work with GlusterFS. <https://bugzilla.redhat.com/show_bug.cgi?id=870256>
