
IRC log for #gluster, 2015-04-01


All times shown according to UTC.

Time Nick Message
00:17 plarsen joined #gluster
00:21 atrius joined #gluster
00:21 Rapture joined #gluster
00:23 side_control joined #gluster
00:28 gildub joined #gluster
01:20 osc_khoj joined #gluster
01:56 gnudna joined #gluster
02:00 haomaiwa_ joined #gluster
02:08 harish joined #gluster
02:09 anrao joined #gluster
02:12 lyang0 joined #gluster
02:26 badone_ joined #gluster
02:28 RameshN joined #gluster
02:32 MugginsM joined #gluster
02:36 maveric_amitc_ joined #gluster
02:37 atrius joined #gluster
02:48 nangthang joined #gluster
03:03 bharata-rao joined #gluster
03:29 ppai joined #gluster
03:39 gnudna left #gluster
03:42 overclk joined #gluster
03:47 sripathi joined #gluster
03:58 kdhananjay joined #gluster
04:02 MugginsM joined #gluster
04:05 bala joined #gluster
04:08 kumar joined #gluster
04:13 atinmu joined #gluster
04:19 nbalacha joined #gluster
04:22 RameshN joined #gluster
04:26 itisravi joined #gluster
04:28 kanagaraj joined #gluster
04:43 nbalacha joined #gluster
04:46 Bhaskarakiran joined #gluster
04:49 sage joined #gluster
04:50 deepakcs joined #gluster
04:51 schandra joined #gluster
04:52 sripathi1 joined #gluster
04:54 bala joined #gluster
04:59 rafi joined #gluster
05:01 rafi joined #gluster
05:01 rjoseph joined #gluster
05:08 poornimag joined #gluster
05:09 T3 joined #gluster
05:11 soumya joined #gluster
05:12 dusmant joined #gluster
05:14 lalatenduM joined #gluster
05:17 aravindavk joined #gluster
05:18 ndarshan joined #gluster
05:18 hgowtham joined #gluster
05:23 vijaykumar joined #gluster
05:26 Manikandan joined #gluster
05:26 smohan joined #gluster
05:33 spandit joined #gluster
05:33 vimal joined #gluster
05:33 hgowtham joined #gluster
05:36 pppp joined #gluster
05:39 maveric_amitc_ joined #gluster
05:40 poornimag joined #gluster
05:46 atalur joined #gluster
05:50 kdhananjay joined #gluster
05:55 kotreshhr joined #gluster
05:56 hagarth joined #gluster
05:58 sac joined #gluster
06:01 gem joined #gluster
06:06 Pupeno joined #gluster
06:08 kovshenin joined #gluster
06:10 ashiq joined #gluster
06:10 purpleidea joined #gluster
06:10 purpleidea joined #gluster
06:12 schandra joined #gluster
06:16 anrao joined #gluster
06:17 raghu joined #gluster
06:24 wsirc_4067 joined #gluster
06:27 atalur joined #gluster
06:28 wsirc_4067 has anyone gotten Striped Replicated to work with ESXi over nfs
06:30 deZillium joined #gluster
06:33 nbalacha joined #gluster
06:34 nbalacha joined #gluster
06:34 anil joined #gluster
06:36 schandra joined #gluster
06:37 dusmant joined #gluster
06:40 ghenry joined #gluster
06:40 Manikandan joined #gluster
06:45 rjoseph joined #gluster
06:54 chirino joined #gluster
06:59 T3 joined #gluster
07:00 overclk joined #gluster
07:01 prg3 joined #gluster
07:01 overclk joined #gluster
07:01 smohan joined #gluster
07:01 Philambdo joined #gluster
07:03 dusmant joined #gluster
07:05 glusterbot News from newglusterbugs: [Bug 1207967] heal doesn't work for new volumes to reflect the 128 bits changes in quota after upgrade <https://bugzilla.redhat.com/show_bug.cgi?id=1207967>
07:08 deniszh joined #gluster
07:19 Debloper joined #gluster
07:20 purpleidea joined #gluster
07:20 purpleidea joined #gluster
07:22 karnan joined #gluster
07:23 ndevos wsirc_4067: I think some people use distribute-replicate, you should really read the post about ,,(stripe) before picking that variant
07:23 glusterbot wsirc_4067: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
07:27 wsirc_4067 thanks, I'm aware of the different types; distribute wouldn't be the best for large files like VMs
07:28 wsirc_4067 I have striped-replicated working over nfs to xen but it won't work correctly to esxi
07:28 T0aD joined #gluster
07:29 wsirc_4067 it mounts in esxi but the log is showing errors of WARNING: NFS: 4031: Short read for object
07:31 karnan_ joined #gluster
07:32 ndevos note that stripe mostly only makes sense if you have files that are bigger than the filesystem on your bricks; if you do not have those, I'd really go with distribute instead
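(A minimal sketch of the distribute-replicate alternative ndevos recommends, with hypothetical hostnames and brick paths that are not taken from this log: replica 2 over four bricks gives a 2x2 distributed-replicated volume instead of a striped one.)

    gluster volume create vmstore replica 2 \
        server1:/bricks/vmstore server2:/bricks/vmstore \
        server3:/bricks/vmstore server4:/bricks/vmstore
    gluster volume start vmstore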
07:32 badone__ joined #gluster
07:32 ndevos and do you have any errors in the /var/log/gluster/nfs.log ?
07:32 wsirc_4067 no errors on the nfs.log side
07:34 ndevos anything you can do to get more verbose messages on the esxi side?
07:34 ndevos "short read for object" is very little information to go on
07:35 glusterbot News from newglusterbugs: [Bug 1207979] BitRot :- In case of NFS mount, Object Versioning and file signing is not working as expected <https://bugzilla.redhat.com/show_bug.cgi?id=1207979>
07:35 harish joined #gluster
07:39 poornimag joined #gluster
07:40 wsirc_4067 if I have a large file, in distribute it would only be placed on one brick and not spread across all bricks as in a stripe, isn't that correct?
07:40 wsirc_4067 )WARNING: NFS: 4031: Short read for object b00f 44 b08baeda 8a56598a 4c474f3a 581cb822 d9428cf3 6f95a4be 2f41ebeb 1db98223 3f8ce880b8478268 c1b3f5cd 0 0 offset: 0x0 requested: 0x200 read: 0x1c
07:40 wsirc_4067 not sure that helps much
07:41 wsirc_4067 I can write to the nfs without issues but I can't read
07:42 wsirc_4067 I can see files and directories but cp or cat a file gives read error: Input/output error
07:42 nbalacha joined #gluster
07:42 wsirc_4067 this is on the esxi box
07:43 overclk joined #gluster
07:43 wsirc_4067 xenserver can mount and read/write without issue
07:44 ndevos well, that helps a little, the "object" looks like a file-handle, and there was a READ request of 0x200 bytes, but only 0x1c bytes got returned by the nfs-server
07:44 deZillium joined #gluster
07:46 ndevos can you read just some bytes from that file, like with: dd if=/path/to/vm/img/file of=/dev/null bs=8 count=1
07:47 fsimonce joined #gluster
07:48 hchiramm joined #gluster
07:54 wsirc_4067 on smaller files, no, the dd fails, but on large files I can read some bytes
07:57 rjoseph joined #gluster
07:58 atalur joined #gluster
07:58 chirino joined #gluster
07:59 purpleidea joined #gluster
07:59 purpleidea joined #gluster
08:01 Slashman joined #gluster
08:01 nishanth joined #gluster
08:02 wsirc_4067 if i read to the end of the larger files they will fail as well
08:06 liquidat joined #gluster
08:09 Manikandan joined #gluster
08:10 poornimag joined #gluster
08:17 Norky joined #gluster
08:19 ctria joined #gluster
08:32 ktosiek joined #gluster
08:35 glusterbot News from newglusterbugs: [Bug 1075417] Spelling mistakes and typos in the glusterfs source <https://bugzilla.redhat.com/show_bug.cgi?id=1075417>
08:43 fattaneh1 joined #gluster
08:46 gem joined #gluster
08:48 ndevos wsirc_4067: and you can read those failing files on another (non-esxi) nfs-client without errors?
08:48 T3 joined #gluster
08:49 bala1 joined #gluster
08:51 anrao joined #gluster
08:52 dusmant joined #gluster
08:52 fattaneh1 left #gluster
08:52 ndarshan joined #gluster
08:53 nishanth joined #gluster
08:54 pkoro joined #gluster
08:57 wsirc_4067 yes
08:59 DV__ joined #gluster
09:01 Leildin joined #gluster
09:02 Slashman joined #gluster
09:05 jiku joined #gluster
09:06 m0ellemeister joined #gluster
09:08 kshlm joined #gluster
09:08 Manikandan joined #gluster
09:09 Bhaskarakiran joined #gluster
09:12 dusmant joined #gluster
09:17 DV__ joined #gluster
09:20 ndarshan joined #gluster
09:21 rjoseph joined #gluster
09:22 mbukatov joined #gluster
09:24 monotek joined #gluster
09:28 nishanth joined #gluster
09:40 hgowtham joined #gluster
09:55 danny__ joined #gluster
09:57 Manikandan joined #gluster
09:57 danny__ Does anyone know if it's possible in AWS to take an EBS volume containing data (that isn't already part of a Gluster dataset), snapshot it, spin up a copy of that EBS volume on 3 Gluster servers, and then have Gluster "figure out" that they all match already once a volume is created with them?
09:58 ndevos wsirc_4067: sounds like an issue with the esxi nfs-client, I would not know how to fix that...
09:59 ndevos danny__: I don't know if it works, but it sounds as if it could work
10:00 danny__ ndevos: I know it works when you do it with ebs volumes that were already part of a gluster data set, just wasn't sure if it would work this way or not; i guess I'll just have to test it
10:01 Pupeno joined #gluster
10:02 rjoseph joined #gluster
10:04 ndevos danny__: you should be able to create the volume with the contents in it, and then somehow run self-heal to create the .glusterfs directory, clone after that and add the 2 other EBS to the volume
10:05 ndevos you would start with a one-brick volume, and then expand it... at least, that is what I would try
10:05 danny__ ok thanks
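(A rough, untested sketch of what ndevos outlines above, with hypothetical node names and brick paths: start from a one-brick volume whose brick already holds the EBS data, trigger lookups so gluster creates its metadata, then grow it into a replica 3 volume and heal.)

    # the brick directory already contains the data from the EBS snapshot
    gluster volume create gv0 node1:/bricks/ebs-data
    gluster volume start gv0
    # walk the volume once through a client mount so the .glusterfs metadata gets created
    mount -t glusterfs node1:/gv0 /mnt/gv0
    find /mnt/gv0 > /dev/null
    # then raise the replica count with the two cloned EBS bricks and let self-heal copy the data
    gluster volume add-brick gv0 replica 3 node2:/bricks/ebs-data node3:/bricks/ebs-data
    gluster volume heal gv0 full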
10:05 ira joined #gluster
10:10 o5k joined #gluster
10:16 anrao joined #gluster
10:23 dusmant joined #gluster
10:24 nbalacha joined #gluster
10:27 Manikandan joined #gluster
10:29 Pupeno joined #gluster
10:30 aravindavk joined #gluster
10:31 and` joined #gluster
10:31 atrius joined #gluster
10:35 glusterbot News from newglusterbugs: [Bug 1208067] [SNAPSHOT]: Snapshot create fails while using scheduler to create snapshots <https://bugzilla.redhat.com/show_bug.cgi?id=1208067>
10:35 glusterbot News from newglusterbugs: [Bug 1199075] iobuf: Ref should be taken on iobuf through proper functions. <https://bugzilla.redhat.com/show_bug.cgi?id=1199075>
10:37 T3 joined #gluster
10:38 and` joined #gluster
10:50 harish_ joined #gluster
10:50 poornimag joined #gluster
10:56 wsirc_4067 the nfs esxi problem only exists if you create a volume with stripe, it works fine with a distribute
10:57 bfoster joined #gluster
10:57 wsirc_4067 it has to be a gluster issue
10:59 ndevos wsirc_4067: it would be interesting to see two tcpdumps of the same file on your stripe-replicated volume, one with the trace from esxi and one from a working nfs-client
11:01 ndevos wsirc_4067: like, capture the tcpdump on the nfs-server, with something like 'tcpdump -i any -s 0 -w /var/tmp/esxi-trace.pcap tcp and not port 22'
11:01 ndevos wsirc_4067: best would be to have the whole mounting process captured too, so start the tcpdump and then do a mount and access on the esxi server
11:02 ndevos wsirc_4067: of course, it would be most efficient if there is no other gluster/nfs traffic on that nfs-server while you capture it
11:02 ndevos and, once the esxi test has been done, do the same test from the working nfs-client
11:03 ndevos those .pcap files should get gzipped, and you can then attach those to ,,(fileabug)
11:03 glusterbot Please file a bug at http://goo.gl/UUuCq
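(Putting ndevos' capture steps together, one run per nfs-client might look like this on the gluster nfs-server; the trace filenames are just examples.)

    # start the capture before the client mounts, then mount and read the file from esxi
    tcpdump -i any -s 0 -w /var/tmp/esxi-trace.pcap tcp and not port 22
    # stop tcpdump with ctrl-c afterwards, compress the trace, and attach it to the bug
    gzip /var/tmp/esxi-trace.pcap
    # repeat with e.g. /var/tmp/linux-trace.pcap while reading from the working nfs-client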
11:05 karnan joined #gluster
11:06 glusterbot News from newglusterbugs: [Bug 1117888] Problem when enabling quota : Could not start quota auxiliary mount <https://bugzilla.redhat.com/show_bug.cgi?id=1117888>
11:06 glusterbot News from newglusterbugs: [Bug 1134050] Glfs_fini() not freeing the resources <https://bugzilla.redhat.com/show_bug.cgi?id=1134050>
11:06 glusterbot News from newglusterbugs: [Bug 1192075] libgfapi clients hang if glfs_fini is called before glfs_init <https://bugzilla.redhat.com/show_bug.cgi?id=1192075>
11:06 glusterbot News from newglusterbugs: [Bug 1208079] Getting ENOENT instead of EDQUOTE when limit execeeds in disperse volume <https://bugzilla.redhat.com/show_bug.cgi?id=1208079>
11:06 glusterbot News from newglusterbugs: [Bug 1166862] rmtab file is a bottleneck when lot of clients are accessing a volume through NFS <https://bugzilla.redhat.com/show_bug.cgi?id=1166862>
11:08 wsirc_4067 I wonder if the issue is related to something like https://bugzilla.redhat.com/show_bug.cgi?id=1132392
11:08 glusterbot Bug 1132392: low, high, 3.4.6, ndevos, MODIFIED , NFS interoperability problem: stripe-xlator removes EOF at end of READDIR
11:09 ndevos something like that is possible, but that problem is related to READDIR, like 'ls'
11:09 nbalacha joined #gluster
11:10 ricky-ticky joined #gluster
11:11 poornimag joined #gluster
11:13 firemanxbr joined #gluster
11:14 soumya joined #gluster
11:15 Pupeno joined #gluster
11:15 Pupeno joined #gluster
11:15 rjoseph joined #gluster
11:16 kanagaraj joined #gluster
11:17 kkeithley joined #gluster
11:20 RameshN joined #gluster
11:23 Pupeno joined #gluster
11:24 atalur joined #gluster
11:25 bennyturns joined #gluster
11:26 soumya joined #gluster
11:31 T3 joined #gluster
11:33 tuxle joined #gluster
11:33 bennyturns joined #gluster
11:33 kanagaraj joined #gluster
11:39 anil joined #gluster
11:41 tuxle joined #gluster
11:43 tuxle joined #gluster
11:43 hgowtham joined #gluster
11:43 tuxle hi all
11:43 tuxle is there a way to force a replication sync?
11:43 rjoseph joined #gluster
11:44 tuxle i had to replace a brick, but the new one is only holding new files or files somebody requested on the new brick
11:44 tuxle shouldn't gluster sync them as soon as i add the second brick?
11:47 pppp ndevos: ping
11:47 glusterbot pppp: Please don't naked ping. http://blogs.gnome.org/mark​mc/2014/02/20/naked-pings/
11:52 sac tuxle, try gluster volume heal <volumename> full
11:52 sac To start the sync (volume heal) ...
11:53 sac And gluster volume heal <volume-name> info (To check the status)
11:53 sac tuxle, ^^
11:53 tuxle okay, i started it
11:54 sac Cool.
11:54 tuxle sac: gluster volume heal virtStorage info tells me "Number of entries: 43" for the original brick and "Number of entries: 0" for the new one
11:54 tuxle sac: do i just monitor it now?
11:54 sac tuxle, how did you add the new brick? Can you please give me some context?
11:55 tuxle sure
11:55 sahina joined #gluster
11:55 sac tuxle, if you added a new brick altogether then you may have to make sure you set up your backend directory right.
11:55 tuxle i created the cluster with gluster create volume virtStorage replica 2 gl1:/brick/virtStorge gl2:/brick/virtStorage
11:56 tuxle then some idiot trashed the gl1
11:56 tuxle and after i reinstalled gl1 i added the peers again
11:56 sac And how did you set up the backend?
11:56 tuxle with xfs
11:57 tuxle the remove-brick replica 1 gl1:/brick/...
11:57 tuxle and detach peer worked
11:57 tuxle then adding everything went well
11:58 sac tuxle, check your backend directory... getfattr -d -e hex -m. <your backend directory>
11:58 sac Refer: http://www.gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server
11:59 tuxle # file: brick/virtStorage
11:59 tuxle security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a686f6d655f726f6f745f743a733000
11:59 tuxle trusted.afr.virtStorage-client-0=0x000000000000000000000000
11:59 tuxle trusted.afr.virtStorage-client-1=0x000000000000000000000000
11:59 tuxle trusted.gfid=0x00000000000000000000000000000001
11:59 tuxle trusted.glusterfs.dht=0x000000010000000000000000ffffffff
11:59 tuxle trusted.glusterfs.volume-id=0xcfabc8ac392c4e83bf11e210546a6303
12:00 sac tuxle, And the volume-id on another machine matches this I guess.
12:00 Manikandan joined #gluster
12:01 sac If so, then things look good.
12:01 sac gluster volume heal full should heal your files.
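(For reference, the brick-restoration page sac linked boils down to roughly this; the paths are tuxle's, the hex value has to be copied from a surviving brick, and this is only a sketch of that document, not a verified procedure.)

    # on a healthy node, read the volume-id of the existing brick
    getfattr -n trusted.glusterfs.volume-id -e hex /brick/virtStorage
    # on the rebuilt node, stamp the new, empty brick directory with the same id
    setfattr -n trusted.glusterfs.volume-id -v 0x<hex-value-from-above> /brick/virtStorage
    /etc/init.d/glusterfs-server restart
    gluster volume heal virtStorage full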
12:02 ppai joined #gluster
12:04 jdarcy joined #gluster
12:06 glusterbot News from newglusterbugs: [Bug 1208097] [SNAPSHOT] : Appending time stamp to snap name while using scheduler to create snapshots should be removed. <https://bugzilla.redhat.com/show_bug.cgi?id=1208097>
12:10 soumya joined #gluster
12:18 monotek1 joined #gluster
12:33 rjoseph joined #gluster
12:34 LebedevRI joined #gluster
12:36 hgowtham joined #gluster
12:37 T3 joined #gluster
12:43 B21956 joined #gluster
12:44 osc_khoj joined #gluster
12:44 bene2 joined #gluster
12:46 diegows joined #gluster
12:47 dgandhi joined #gluster
12:48 tuxle sac: yes, the volume-ids match
12:48 tuxle sac: heal info is now only showing 3 files for the first one an 1 file for the second brick
12:49 tuxle sac: will this go to zero after it finished?
12:49 sac tuxle, yes.
12:49 sac Now check in your backend if files are there as expected.
12:50 lalatenduM joined #gluster
12:51 Manikandan joined #gluster
12:51 hgowtham joined #gluster
12:51 wkf joined #gluster
12:51 dockbram joined #gluster
12:52 ashiq joined #gluster
12:54 tuxle sac: checking
12:55 aravindavk joined #gluster
12:56 rjoseph joined #gluster
12:58 Gill joined #gluster
12:59 lpabon joined #gluster
13:02 kotreshhr joined #gluster
13:06 glusterbot News from newglusterbugs: [Bug 1208123] Warning messages seen while installing glusterfs on rhel7.1 "Non-fatal POSTIN scriptlet failure in rpm package.. <https://bugzilla.redhat.com/show_bug.cgi?id=1208123>
13:06 glusterbot News from newglusterbugs: [Bug 1208124] BitRot :- checksum value stored in xattr is different than actual value for some file (checksum is truncated if it has terminating character as part of checksum itself) <https://bugzilla.redhat.com/show_bug.cgi?id=1208124>
13:06 glusterbot News from newglusterbugs: [Bug 1208118] gf_log_inject_timer_event can crash if the passed ctx is null. <https://bugzilla.redhat.com/show_bug.cgi?id=1208118>
13:07 vipulnayyar joined #gluster
13:07 julim joined #gluster
13:08 DV__ joined #gluster
13:09 bala joined #gluster
13:10 Creeture joined #gluster
13:14 Creeture joined #gluster
13:15 T3 joined #gluster
13:18 Creeture joined #gluster
13:25 hagarth joined #gluster
13:28 georgeh-LT2 joined #gluster
13:29 Creeture joined #gluster
13:31 nbalacha joined #gluster
13:35 gnudna joined #gluster
13:35 hgowtham joined #gluster
13:35 gnudna left #gluster
13:36 Manikandan joined #gluster
13:36 glusterbot News from newglusterbugs: [Bug 1208131] BitRot :- Tunable (scrub-throttle, scrub-frequency, pause/resume) for scrub functionality don't have any impact on scrubber <https://bugzilla.redhat.com/show_bug.cgi?id=1208131>
13:36 glusterbot News from newglusterbugs: [Bug 1208134] [nfs]: copy of regular file to nfs mount fails with "Invalid argument" <https://bugzilla.redhat.com/show_bug.cgi?id=1208134>
13:36 kanagaraj joined #gluster
13:42 RameshN joined #gluster
13:46 diegows joined #gluster
13:47 ctria joined #gluster
13:49 harish_ joined #gluster
13:51 smohan_ joined #gluster
13:52 and` joined #gluster
13:57 chirino joined #gluster
13:58 and` joined #gluster
14:00 tuxle sac: it is strange, the second brick reports about 90GB less data usage
14:01 tuxle i am checking now with sha256sum
14:01 hamiller joined #gluster
14:01 coredump joined #gluster
14:03 tuxle should i disable the heal process after it finishes?
14:07 vipulnayyar joined #gluster
14:10 meghanam joined #gluster
14:13 sac tuxle, no need.
14:13 tuxle sac, cool
14:14 tuxle now it is just running quietly
14:14 tuxle every now and then it shows me a disk of a vm, but i guess this is normal
14:14 tuxle the vm appears on both lists
14:15 sac tuxle, if the files are huge it takes a while.
14:15 _Bryan_ joined #gluster
14:16 tuxle this is ok for me
14:16 tuxle i am more concerned about the 80GB difference in size
14:16 tuxle sac, is it possible to save 80GB on disc compared to the gluster volume?
14:17 sac tuxle, no.
14:17 wushudoin joined #gluster
14:17 sac tuxle, it is in the process of healing...
14:18 sac Once done the sizes should match.
14:18 tuxle the ioload went down 2 minutes ago
14:18 tuxle i am a little confused, because df -h tells me usage 442GB
14:19 tuxle and du -h /brick or df -h /virtStorage tell me 366GB
14:20 tuxle so I wonder if the second machine stopped healing...
14:25 tuxle sac, does gluster handle hole punching?
14:25 tuxle the difference would match the unallocated space of the VMs
14:25 sac tuxle, you are talking about sparse files?
14:26 tuxle sac, yes
14:28 Gill_ joined #gluster
14:33 jmarley joined #gluster
14:33 jmarley joined #gluster
14:34 kotreshhr left #gluster
14:37 roost joined #gluster
14:39 Gill_ joined #gluster
14:42 eugenewrayburn joined #gluster
14:44 plarsen joined #gluster
14:47 eugenewrayburn Sorry, I haven't used IRC before. I found the cause of my mount failing without any error: it's a race condition bug in the mount.glusterfs script. The user list solved it here: http://www.gluster.org/pipermail/gluster-users.old/2015-January/020367.html  That thread has a lot of great debugging commands for gluster client mounting. So I added a sleep in the mount.glusterfs script.
14:50 kdhananjay joined #gluster
14:57 purpleidea joined #gluster
14:57 purpleidea joined #gluster
15:03 tuxle eugenewrayburn: did you solve your problem?
15:03 RameshN joined #gluster
15:04 eugenewrayburn Yes, adding a sleep to the mount script solved the problem.  There is a patch here:  https://bugzilla.redhat.com/show_bug.cgi?id=1151696
15:04 glusterbot Bug 1151696: unspecified, unspecified, ---, bugs, NEW , mount.glusterfs fails due to race condition in `stat` call
15:07 glusterbot News from newglusterbugs: [Bug 1151696] mount.glusterfs fails due to race condition in `stat` call <https://bugzilla.redhat.com/show_bug.cgi?id=1151696>
15:08 sac tuxle, yes.
15:09 eugenewrayburn tuxle: Yes, I fixed it by adding a sleep to the mount.glusterfs script.  Thanks.
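(The linked patch adds a short sleep inside mount.glusterfs; an equivalent stop-gap without touching the script could be a small retry wrapper like this, with hypothetical server and volume names.)

    #!/bin/sh
    # retry the fuse mount a few times instead of patching mount.glusterfs
    for i in 1 2 3 4 5; do
        mount -t glusterfs server1:/gv0 /mnt/gv0 && break
        sleep 2
    done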
15:10 tuxle sac, so I would be able to reduce the storage if I migrated every now and then to a new brick? ^^
15:12 sac tuxle, you mean you add a brick and increase the size of the volume?
15:12 penglish joined #gluster
15:12 tuxle sac, I am currently thinking about a replace-scenario with 3 or more machines
15:13 tuxle after every replication, the sparse files would reduce their footprint on disk
15:14 kovshenin joined #gluster
15:14 tuxle so every run frees some space
15:20 jmarley joined #gluster
15:32 jackdpeterson joined #gluster
15:36 anrao joined #gluster
15:38 chirino joined #gluster
15:44 nbalacha joined #gluster
15:44 o5k_ joined #gluster
15:53 purpleidea joined #gluster
15:55 monotek1 joined #gluster
15:57 deepakcs joined #gluster
16:02 kshlm joined #gluster
16:11 o5k__ joined #gluster
16:19 vimal joined #gluster
16:19 eugenewrayburn left #gluster
16:40 _Bryan_ joined #gluster
16:40 soumya_ joined #gluster
16:41 ira joined #gluster
16:53 deniszh joined #gluster
16:55 side_control joined #gluster
17:09 o5k_ joined #gluster
17:10 thogue joined #gluster
17:20 Rapture joined #gluster
17:23 maveric_amitc_ joined #gluster
17:37 diegows joined #gluster
17:39 hagarth joined #gluster
17:42 lalatenduM joined #gluster
17:57 o5k joined #gluster
18:00 purpleidea joined #gluster
18:07 brianw_ joined #gluster
18:16 jdhiser joined #gluster
18:18 jdhiser Hi All, I'm having problems with "sudo gluster volume start <vol>" reporting a failure to start the volume, with little/no log information in /var/log/glusterfs/ that's helpful (to me, maybe it's instructive to someone with more knowledge.)  Is this an appropriate place to ask questions?
18:21 purpleidea joined #gluster
18:22 diegows joined #gluster
18:22 vipulnayyar joined #gluster
18:22 JoeJulian Sure is. Please use fpaste.org to paste your client log and paste the link generated.
18:24 jdhiser here's the relevant part of my cli.log:  http://ur1.ca/k31td
18:33 penglish joined #gluster
18:34 JoeJulian Check /var/log/glusterfs/etc-glusterd.vol.log on all your servers.
18:35 jdhiser the file exists and has previous log information in it, but gets no new entries when i attempt a volume start.
18:36 penglish joined #gluster
18:36 JoeJulian Is glusterd running on all your servers?
18:37 jdhiser Yes, I've also tried "sudo /etc/init.d/glusterfs-server restart" on all servers, which claims success.
18:37 JoeJulian selinux?
18:37 jdhiser I don't _think_ so, this is part of an openstack distro that someone else setup.
18:38 JoeJulian Let's try a peer status and a volume info
18:38 jdhiser http://ur1.ca/k31ys
18:42 purpleidea joined #gluster
18:48 jdhiser pretty sure this is _not_ selinux, as /usr/sbin/semanage, etc. are not installed.
18:51 JoeJulian jdhiser: Well that info was pretty unhelpful.... :/
18:51 jdhiser Yes, I know.  :(
18:52 JoeJulian First, try restarting both glusterd.
18:53 papamoose joined #gluster
18:53 JoeJulian If that doesn't work, on 35 run glusterd --debug and try starting it.
18:54 jdhiser mtx35 restarted, mtx-33 claims failure now..
18:54 jdhiser but i don't have a brick on mtx-33 anyhow.
18:55 o5k_ joined #gluster
18:56 jdhiser http://ur1.ca/k325z
19:02 JoeJulian All members of the trusted pool have to be able to resolve the hostnames of all the bricks.
19:02 JoeJulian I think that's the problem here.
19:04 jdhiser all bricks are on mega-techx35.maas. the log on mtx33 shows the mapping of the name mtx35 to its ip (192.168.6.92) correctly, but then says it's unable to find the friend.
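(A quick way to check JoeJulian's point from each node in the pool, using the brick hostname mentioned above; nc is assumed to support -z.)

    getent hosts mega-techx35.maas      # does the brick hostname resolve here?
    ping -c1 mega-techx35.maas
    nc -zv mega-techx35.maas 24007      # glusterd's management port must be reachable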
19:07 m0ellemeister joined #gluster
19:07 semiosis we need to see the glusterd log file, etc-glusterfs-glusterd.log
19:07 jdhiser from which server?
19:07 semiosis the one where you ran volume start & got a failure
19:07 jdhiser http://ur1.ca/k325z
19:07 glusterbot News from newglusterbugs: [Bug 1208255] 'volume heal $volname info' shows FQDNs instead of provided hostnames for nodes <https://bugzilla.redhat.com/show_bug.cgi?id=1208255>
19:07 jdhiser that's from mtx33.  let me get the other one.
19:09 semiosis why only one server?  you could just use mdadm or lvm
19:09 jdhiser the 2nd machine with the other 6 drives needs to be reprovisioned, and the guy with the keys to the machine is out today. :)
19:10 jdhiser i figured I could add bricks later.
19:10 semiosis planning to move the replicas off to the other server?
19:10 rotbeard joined #gluster
19:11 jdhiser i guess i was hoping that when the 2nd server was online with 6 more bricks, it'd replicate OK.
19:11 jdhiser maybe that was naive.
19:11 vipulnayyar joined #gluster
19:11 semiosis you certainly could do that
19:12 jdhiser this is my first time setting up gluster, so I'm still learning.
19:12 semiosis but if you're just starting out it would be easier to start with your desired config, rather than have to reconfigure
19:12 tg2 hey gents
19:12 tg2 is replace-brick faster/better than doing a remove-brick start and add-brick on the new one?
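(For context, the two approaches tg2 is comparing look roughly like this, with hypothetical volume and brick names.)

    # one-shot swap of a brick for a new, empty one
    gluster volume replace-brick gv0 old-host:/bricks/b1 new-host:/bricks/b1 commit force
    # versus draining the old brick first, then adding the new one
    gluster volume remove-brick gv0 old-host:/bricks/b1 start
    gluster volume remove-brick gv0 old-host:/bricks/b1 status
    gluster volume remove-brick gv0 old-host:/bricks/b1 commit
    gluster volume add-brick gv0 new-host:/bricks/b1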
19:17 aravindavk joined #gluster
19:18 HotTopic joined #gluster
19:19 HotTopic left #gluster
19:28 plarsen joined #gluster
19:35 purpleidea joined #gluster
19:35 purpleidea joined #gluster
19:43 Gill joined #gluster
19:44 _Bryan_ joined #gluster
19:51 anrao joined #gluster
20:06 social joined #gluster
20:07 jdhiser well, i got my volume started (thanks for the help guys!), and mounted on one machine (thanks again, guys!), but the ultimate goal was to get the volume mounted on a QEMU VM based on debian/wheezy. install of gluster goes OK, but the peer probe seems to hang. Is there a FAQ about that somewhere?
20:10 jdhiser i've faced networking issues on this machine before.  I see in etc-glusterfs-glusterd.vol.log that there's lots of errors about transport endpoint is not connected, peer  127.0.0.1
20:13 jdhiser i can ping/ssh to 127.0.0.1 (obviously) and to the server where the volume is shared..
20:13 jdhiser do I need to even succeed in a peer probe before mounting, or can I just mount w/o a peer probe?
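(A client does not need to be peer-probed at all; peer probe only adds servers to the trusted pool. A fuse mount from the VM is just something like the following, with the volume name left as a placeholder.)

    mount -t glusterfs mega-techx35.maas:/<vol> /mnt/gluster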
20:30 JoeJulian xaeth: April fool!
20:30 xaeth JoeJulian, :)
20:41 purpleidea joined #gluster
20:41 purpleidea joined #gluster
20:42 DV joined #gluster
20:49 o5k joined #gluster
20:50 kovshenin joined #gluster
21:35 MugginsM joined #gluster
21:36 jdhiser left #gluster
21:54 wkf joined #gluster
21:57 T0aD joined #gluster
22:01 _Bryan_ joined #gluster
22:02 badone__ joined #gluster
22:04 Gill joined #gluster
22:16 purpleidea joined #gluster
22:20 ira joined #gluster
22:20 T3 joined #gluster
22:36 purpleidea joined #gluster
22:51 jaank joined #gluster
22:57 deniszh joined #gluster
23:04 _Bryan_ joined #gluster
23:15 purpleidea joined #gluster
23:15 purpleidea joined #gluster
23:21 T3 joined #gluster
23:25 deniszh1 joined #gluster
23:32 purpleidea joined #gluster
23:40 plarsen joined #gluster
23:51 MugginsM joined #gluster
23:54 davidbitton joined #gluster
