
IRC log for #gluster, 2015-04-13


All times shown according to UTC.

Time Nick Message
00:00 corretico joined #gluster
00:13 RicardoSSP joined #gluster
00:13 RicardoSSP joined #gluster
00:36 theron joined #gluster
00:40 gildub joined #gluster
00:58 wkf joined #gluster
01:14 DV joined #gluster
01:27 DV_ joined #gluster
01:46 Pupeno_ joined #gluster
01:51 harish joined #gluster
02:07 hagarth joined #gluster
02:33 kdhananjay joined #gluster
02:33 kshlm joined #gluster
02:36 harish joined #gluster
02:37 nangthang joined #gluster
02:50 hagarth joined #gluster
02:50 corretico joined #gluster
02:54 nangthang joined #gluster
03:10 DV_ joined #gluster
03:40 gem joined #gluster
03:48 shubhendu joined #gluster
03:48 kumar joined #gluster
03:50 gem_ joined #gluster
03:53 karnan joined #gluster
03:53 nbalacha joined #gluster
03:55 gem joined #gluster
03:57 atinmu joined #gluster
04:04 sage joined #gluster
04:05 gem joined #gluster
04:11 gem joined #gluster
04:11 badone_ joined #gluster
04:15 kkeithley1 joined #gluster
04:27 karnan joined #gluster
04:27 jiffin joined #gluster
04:27 bharata-rao joined #gluster
04:29 RioS2 joined #gluster
04:36 meghanam joined #gluster
04:37 kotreshhr joined #gluster
04:39 deepakcs joined #gluster
04:43 woakes070048 joined #gluster
04:45 woakes070048 Hey guys, over the last few days I have built a 3 node cluster and simulated a power failure on all three nodes, and now I can't get the hosted engine to restart... I have been looking at the logs and can't find what is wrong
04:47 kanagaraj joined #gluster
04:50 ppai joined #gluster
04:51 schandra joined #gluster
04:51 rafi joined #gluster
04:52 DV joined #gluster
04:55 Bhaskarakiran joined #gluster
04:57 woakes070048 joined #gluster
05:02 hagarth joined #gluster
05:06 ndarshan joined #gluster
05:08 pppp joined #gluster
05:09 haomaiwa_ joined #gluster
05:21 sakshi joined #gluster
05:21 RameshN joined #gluster
05:26 karnan joined #gluster
05:27 corretico joined #gluster
05:28 atalur joined #gluster
05:30 lalatenduM joined #gluster
05:33 nishanth joined #gluster
05:36 glusterbot News from newglusterbugs: [Bug 1211123] ls command failed with features.read-only on while mounting ec volume. <https://bugzilla.redhat.com/show_bug.cgi?id=1211123>
05:37 kdhananjay joined #gluster
05:38 ashiq joined #gluster
05:44 nbalacha joined #gluster
05:50 hagarth joined #gluster
05:52 maveric_amitc_ joined #gluster
05:56 anil joined #gluster
05:56 smohan joined #gluster
06:00 overclk joined #gluster
06:01 soumya joined #gluster
06:01 ashiq joined #gluster
06:02 dtrainor joined #gluster
06:03 huleboer joined #gluster
06:03 hchiramm_ joined #gluster
06:03 dtrainor Hi.  I made a mistake, not sure what I was thinking.  I reinstalled a filesystem here at home, over the root drive.  I didn't touch the bricks while I did this.  I created backups of some other data and configs in to the volume....... which exists in Gluster, which I can't access, because this is a new install.
06:03 dtrainor Anyone mind giving me some pointers?
06:06 glusterbot News from resolvedglusterbugs: [Bug 1208134] [nfs]: copy of regular file to nfs mount fails with "Invalid argument" <https://bugzilla.redhat.com/show_bug.cgi?id=1208134>
06:07 dtrainor I'm afraid to add bricks, I'm afraid they'll be overwritten?  Unfamiliar territory.
06:10 dusmant joined #gluster
06:12 dtrainor there's gotta be a way to rebuild an entire volume if the bricks are all restored on the system and added properly...
06:16 Manikandan joined #gluster
06:16 Manikandan_ joined #gluster
06:18 maveric_amitc_ joined #gluster
06:19 huleboer joined #gluster
06:26 jtux joined #gluster
06:28 JoeJulian dtrainor: If it was only one root you killed, you can rsync almost everything from another server.
06:29 JoeJulian dtrainor: the only differences are the peer files. Delete the peer file for *this* server (a server never lists itself, so one peer will be missing from the copied set); you can recreate the missing one from the *other* server's glusterd.info, and set *this* server's uuid in its glusterd.info from the peer file you had to delete.
06:29 JoeJulian dtrainor: /var/lib/glusterd
06:30 JoeJulian and with that, I'm going to bed.
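[ed: JoeJulian's recovery steps can be sketched roughly as the sequence below; every hostname, path, and uuid is a hypothetical placeholder, and this assumes a surviving healthy peer to copy from.]

```shell
# On the reinstalled server, with glusterd stopped.
service glusterd stop

# 1. Copy the gluster state tree from a healthy peer.
rsync -a healthy-peer:/var/lib/glusterd/ /var/lib/glusterd/

# 2. The copied peers/ directory reflects the healthy peer's view:
#    it contains an entry for *this* server but none for the healthy
#    peer itself. Note this server's uuid, then delete its peer file.
cat /var/lib/glusterd/peers/<this-servers-uuid>   # note the uuid
rm /var/lib/glusterd/peers/<this-servers-uuid>

# 3. Set UUID=<this-servers-uuid> in /var/lib/glusterd/glusterd.info,
#    and create a peer file for the healthy peer using the uuid from
#    its own glusterd.info.

service glusterd start
```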
06:30 dtrainor I bet I could, yep.  I had a 2x2 volume, just not sure which files were on which drives, and I don't have a whole lot of space to copy to, temporarily.  Also, this was just one gluster server.
06:30 dtrainor Yes, I could do that, if I had another peer :)  Thanks for the ideas.
06:30 dtrainor I don't know what I was thinking.
06:30 JoeJulian Oh, well then just build the volume the same as it was.
06:30 JoeJulian You'll have to resolve the "path or prefix" error
06:31 JoeJulian @path or prefix
06:31 glusterbot JoeJulian: http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
06:31 dtrainor I would, yep.  I read your blog.
06:31 JoeJulian But it's safe to do. Gluster's really good about not deleting stuff.
06:31 JoeJulian If it's not sure, it defaults to leaving things alone.
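[ed: the fix behind that "path or prefix" blog link boils down to clearing the stale volume markers from each brick so a new volume-create will accept the path; the brick path below is a hypothetical placeholder, and the commands leave the brick's data in place apart from the .glusterfs metadata tree.]

```shell
# On each brick root, remove the old volume identity so the brick
# can be reused in a new volume-create.
setfattr -x trusted.glusterfs.volume-id /data/brick1
setfattr -x trusted.gfid /data/brick1
rm -rf /data/brick1/.glusterfs
```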
06:32 dtrainor I'm not entirely sure which type of volume setup I had... that would be important.
06:33 JoeJulian There are ways to figure it out, looking at which files are on which bricks and looking at ,,(extended attributes)
06:33 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex (unknown), or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://pl.atyp.us/hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
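[ed: as a worked example of reading those attributes, the hex value of trusted.glusterfs.dht on a brick directory encodes that brick's hash range. A small decoding sketch follows; the four-big-endian-uint32 layout (count, hash type, range start, range stop) is an assumption drawn from DHT's disk-layout format, so verify it against your version.]

```python
import struct

def decode_dht_layout(hex_value):
    """Decode a trusted.glusterfs.dht value, e.g. as printed by
    `getfattr -m . -d -e hex <brick-dir>`, into a tuple of
    (count, hash_type, range_start, range_stop)."""
    if hex_value.startswith("0x"):
        hex_value = hex_value[2:]
    raw = bytes.fromhex(hex_value)
    # Four big-endian 32-bit unsigned integers (assumed layout).
    return struct.unpack(">IIII", raw)

# Hypothetical value from one brick of a distribute volume:
print(decode_dht_layout("0x0000000100000000000000003ffffffe"))
# (1, 0, 0, 1073741822)
```

Comparing the (range_start, range_stop) pairs across bricks shows how the hash space was split, which helps reconstruct the original volume layout.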
06:34 JoeJulian I have to go to sleep, so if you're around in 8-9 hours I can tell you step by step then. I just arrived back home and am exhausted.
06:34 dtrainor Sorry for keeping you up, I appreciate your help.
06:34 oxae joined #gluster
06:34 dtrainor sleep well.
06:35 sripathi joined #gluster
06:37 glusterbot News from newglusterbugs: [Bug 1211132] 'volume get' invoked on a non-existing key fails with zero as a return value <https://bugzilla.redhat.com/show_bug.cgi?id=1211132>
06:37 jtux joined #gluster
06:43 gem joined #gluster
06:45 hgowtham joined #gluster
06:45 shubhendu joined #gluster
06:49 ndarshan joined #gluster
06:53 aravindavk joined #gluster
06:54 ashiq joined #gluster
06:58 RaSTar joined #gluster
06:58 lkoranda joined #gluster
07:00 nbalacha joined #gluster
07:01 nshaikh joined #gluster
07:01 itpings joined #gluster
07:01 itpings hey guys
07:02 itpings created and uploaded new video of Gluster
07:02 itpings here is the link
07:02 itpings https://www.youtube.com/watch?v=NYGn7sgMrMw
07:07 glusterbot News from resolvedglusterbugs: [Bug 1139986] DHT + Snapshot :- If snapshot is taken when Directory is created only on hashed sub-vol; On restoring that snapshot Directory is not listed on mount point and lookup on parent is not healing <https://bugzilla.redhat.com/show_bug.cgi?id=1139986>
07:07 glusterbot News from resolvedglusterbugs: [Bug 1146470] The memories are exhausted quickly when handle the message which has multi fragments in a single record <https://bugzilla.redhat.com/show_bug.cgi?id=1146470>
07:07 glusterbot News from resolvedglusterbugs: [Bug 1151397] DHT: Rebalance process crash after add-brick and `rebalance start' operation <https://bugzilla.redhat.com/show_bug.cgi?id=1151397>
07:07 glusterbot News from resolvedglusterbugs: [Bug 1160712] libgfapi: use versioned symbols in libgfapi.so for compatibility <https://bugzilla.redhat.com/show_bug.cgi?id=1160712>
07:07 glusterbot News from resolvedglusterbugs: [Bug 1166278] backport fix for bug 1010241 to 3.4 <https://bugzilla.redhat.com/show_bug.cgi?id=1166278>
07:07 glusterbot News from resolvedglusterbugs: [Bug 1154714] GlusterFS 3.4.7 Tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1154714>
07:07 glusterbot News from resolvedglusterbugs: [Bug 1155630] GlusterFS allows insecure SSL modes <https://bugzilla.redhat.com/show_bug.cgi?id=1155630>
07:07 glusterbot News from resolvedglusterbugs: [Bug 1205715] log files get flooded when removexattr() can't find a specified key or value <https://bugzilla.redhat.com/show_bug.cgi?id=1205715>
07:07 glusterbot News from resolvedglusterbugs: [Bug 1206120] DHT: nfs.log getting filled with "I" logs <https://bugzilla.redhat.com/show_bug.cgi?id=1206120>
07:07 glusterbot News from resolvedglusterbugs: [Bug 1056085] logs flooded with invalid argument errors with quota enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1056085>
07:07 glusterbot News from resolvedglusterbugs: [Bug 1151308] data loss when rebalance + renames are in progress and bricks from replica pairs goes down and comes back <https://bugzilla.redhat.com/show_bug.cgi?id=1151308>
07:07 glusterbot News from resolvedglusterbugs: [Bug 1139250] vdsm invoked oom-killer during rebalance and Killed process 4305, UID 0, (glusterfs nfs process) <https://bugzilla.redhat.com/show_bug.cgi?id=1139250>
07:07 glusterbot News from resolvedglusterbugs: [Bug 1139984] DHT + rebalance : rebalance process crashed + data loss + few Directories are present on sub-volumes but not visible on mount point + lookup is not healing directories <https://bugzilla.redhat.com/show_bug.cgi?id=1139984>
07:07 glusterbot News from resolvedglusterbugs: [Bug 1132392] NFS interoperability problem: stripe-xlator removes EOF at end of READDIR <https://bugzilla.redhat.com/show_bug.cgi?id=1132392>
07:07 glusterbot News from resolvedglusterbugs: [Bug 1158923] glusterfs logrotate config file pollutes global config <https://bugzilla.redhat.com/show_bug.cgi?id=1158923>
07:07 glusterbot News from resolvedglusterbugs: [Bug 1201898] Building argp-standalone with gcc-5 (i.e. on Fedora 22 and 23/rawhide) <https://bugzilla.redhat.com/show_bug.cgi?id=1201898>
07:13 deniszh joined #gluster
07:25 soumya joined #gluster
07:32 [Enrico] joined #gluster
07:32 fsimonce joined #gluster
07:37 glusterbot News from resolvedglusterbugs: [Bug 1139988] DHT :- data loss - file is missing on renaming same file from multiple client at same time <https://bugzilla.redhat.com/show_bug.cgi?id=1139988>
07:37 glusterbot News from resolvedglusterbugs: [Bug 1139992] DHT :- rm -rf is not removing stale link file and because of that unable to create file having same name as stale link file <https://bugzilla.redhat.com/show_bug.cgi?id=1139992>
07:37 glusterbot News from resolvedglusterbugs: [Bug 1139995] [DHT:REBALANCE]: Rebalance failures are seen with error message " remote operation failed: File exists" <https://bugzilla.redhat.com/show_bug.cgi?id=1139995>
07:37 glusterbot News from resolvedglusterbugs: [Bug 1139996] DHT: NFS process crashed on a node in a cluster when another storage node in the cluster went offline <https://bugzilla.redhat.com/show_bug.cgi?id=1139996>
07:37 glusterbot News from resolvedglusterbugs: [Bug 1144792] Very high memory usage during rebalance <https://bugzilla.redhat.com/show_bug.cgi?id=1144792>
07:37 glusterbot News from resolvedglusterbugs: [Bug 1119894] Glustershd memory usage too high <https://bugzilla.redhat.com/show_bug.cgi?id=1119894>
07:37 glusterbot News from resolvedglusterbugs: [Bug 1116514] iobuf_unref errors killing logging <https://bugzilla.redhat.com/show_bug.cgi?id=1116514>
07:37 glusterbot News from resolvedglusterbugs: [Bug 1139997] rebalance is not resulting in the hash layout changes being available to nfs client <https://bugzilla.redhat.com/show_bug.cgi?id=1139997>
07:37 glusterbot News from resolvedglusterbugs: [Bug 1139998] Renaming file while rebalance is in progress causes data loss <https://bugzilla.redhat.com/show_bug.cgi?id=1139998>
07:44 woakes070048 itpings, hey great video
07:51 ndarshan joined #gluster
07:54 shubhendu joined #gluster
07:54 itpings ty
07:55 RameshN joined #gluster
07:56 liquidat joined #gluster
07:57 anekkunt joined #gluster
07:58 anekkunt joined #gluster
07:59 corretico joined #gluster
08:03 Slashman joined #gluster
08:05 Guest9716 joined #gluster
08:07 glusterbot News from newglusterbugs: [Bug 1187372] Samba "use sendfile" is incompatible with GlusterFS libgfapi vfs_glusterfs. <https://bugzilla.redhat.com/show_bug.cgi?id=1187372>
08:07 glusterbot News from newglusterbugs: [Bug 1177411] use of git submodules blocks automatic ubuntu package builds <https://bugzilla.redhat.com/show_bug.cgi?id=1177411>
08:07 lyang0 joined #gluster
08:09 pppp joined #gluster
08:10 Philambdo joined #gluster
08:12 hagarth joined #gluster
08:12 ctria joined #gluster
08:16 DV joined #gluster
08:24 RameshN joined #gluster
08:32 ndevos itpings: please send the link to the video to the gluster-users and gluster-devel mailinglist - it'll get lost here
08:32 * ndevos will watch it later, not sure if he can find the time for it today
08:33 getup joined #gluster
08:33 itpings i uploaded it to wiki
08:33 itpings i mean i posted it on gluster wiki
08:34 ndevos itpings: thanks, that definitely is helpful too!
08:34 ndevos itpings: I think hchiramm_ can post the link in the gluster twitter feed, and maybe in facebook too
08:36 itpings great
08:36 itpings here is the post
08:36 itpings I have created a video on How to create a two node replication cluster with Gluster. Here is the video link https://www.youtube.com/watch?v=NYGn7sgMrMw
08:37 glusterbot News from newglusterbugs: [Bug 1206539] Tracker bug for GlusterFS documentation Improvement. <https://bugzilla.redhat.com/show_bug.cgi?id=1206539>
08:37 glusterbot News from resolvedglusterbugs: [Bug 1210934] qcow2 image creation using qemu-img hits segmentation fault <https://bugzilla.redhat.com/show_bug.cgi?id=1210934>
08:37 ndevos itpings: do you have a twitter account? that way hchiramm_ can just retweet it?
08:38 itpings http://www.gluster.org/community/documentation/index.php/User:Itpings
08:38 itpings no i dont use twitter
08:38 itpings facebook for marketing my youtube video channel
08:39 ndevos okay, that should be good, did you post the video on facebook yet?
08:39 itpings only on linux channels
08:39 itpings because i create linux videos
08:39 itpings linux king is my channel
08:39 soumya joined #gluster
08:40 itpings and i did spread the word on google + as well
08:41 itpings By the way you are free to do whatever you like with this video ndevos ...i am glad i could help the community somehow !
08:41 itpings Next plan is to make some more videos which will help people understand Gluster easily
08:43 ndevos itpings: that sounds awesome, you may want to talk to tigert about getting those videos included on the gluster.org site somewhere
08:43 ndevos itpings: I cant find you (or at least no videos) in facebook, got a link?
08:43 itpings i don't know tigrt
08:43 itpings just a min
08:43 itpings i will give you links
08:44 itpings https://www.youtube.com/channel/UC-f_LxJoe6sgBIEIkHAM8mA
08:44 itpings youtube channel
08:44 itpings Now on Facebook
08:44 ktosiek joined #gluster
08:45 itpings https://www.facebook.com/groups/LZHProject/
08:45 crashmag joined #gluster
08:45 woakes070048 joined #gluster
08:45 itpings Direct link to video https://www.youtube.com/watch?v=NYGn7sgMrMw
08:46 * tigert looks
08:46 ndevos itpings: and G+ ?
08:47 itpings just a min
08:47 tigert I guess someone could post them as a blog post
08:47 tigert which pushes them to fb/twitter?
08:47 * tigert is a bit scared of that setup for now :)
08:48 itpings https://plus.google.com/u/0/103747461418699057891/posts
08:48 ndevos thanks!
08:48 itpings guys i created it for Gluster community so you are free to do what ever you like...And thanks for the support
08:49 ndevos very much appreciated, itpings!
08:49 ira joined #gluster
08:50 itpings likewise
08:51 itpings ok i have one question if you could answer it will help my concept
08:52 itpings and that will be my next video as well
08:53 itpings so here is the scenario: i created a vol gv0 but now the problem is i want to replace it with gv1 which is created in another directory...Now if i only use vol replace would it help ? and gv0 will be replaced with gv1 and the directories as well or not ?
08:53 Norky joined #gluster
08:58 vimal joined #gluster
09:12 harish joined #gluster
09:26 [Enrico] joined #gluster
09:29 ppai_ joined #gluster
09:36 kotreshhr1 joined #gluster
09:43 hagarth joined #gluster
09:44 T0aD joined #gluster
09:47 sripathi joined #gluster
09:50 corretico joined #gluster
09:52 itpings ?
09:52 itpings ok i have one question if you could answer it will help my concept
09:52 itpings and that will be my next video as well
09:52 itpings so here is the scenario: i created a vol gv0 but now the problem is i want to replace it with gv1 which is created in another directory...Now if i only use vol replace would it help ? and gv0 will be replaced with gv1 and the directories as well or not ?
09:57 anrao joined #gluster
09:59 DV_ joined #gluster
10:09 nangthang joined #gluster
10:17 hgowtham joined #gluster
10:38 ppai_ joined #gluster
10:39 gem_ joined #gluster
10:46 gem__ joined #gluster
10:52 kdhananjay joined #gluster
10:53 gem joined #gluster
10:54 nbalacha joined #gluster
10:55 auzty joined #gluster
11:08 glusterbot News from newglusterbugs: [Bug 1211221] Any operation that relies on fd->flags may not work on anonymous fds <https://bugzilla.redhat.com/show_bug.cgi?id=1211221>
11:32 ppai_ joined #gluster
11:38 glusterbot News from newglusterbugs: [Bug 1208819] mount ec volume through nfs, ls shows no file , but with specified filename, the file can be modified <https://bugzilla.redhat.com/show_bug.cgi?id=1208819>
11:38 xiu B 12
11:44 ndevos itpings: do you mean you want to rename a volume?
11:44 soumya joined #gluster
11:45 LebedevRI joined #gluster
11:45 dusmant joined #gluster
11:48 atalur joined #gluster
11:54 kdhananjay joined #gluster
11:56 monotek1 joined #gluster
12:06 bene2 joined #gluster
12:07 soumya joined #gluster
12:08 glusterbot News from resolvedglusterbugs: [Bug 1208819] mount ec volume through nfs, ls shows no file , but with specified filename, the file can be modified <https://bugzilla.redhat.com/show_bug.cgi?id=1208819>
12:14 smohan joined #gluster
12:21 theron joined #gluster
12:23 theron joined #gluster
12:36 chirino joined #gluster
12:42 dusmant joined #gluster
12:46 bene3 joined #gluster
12:53 Gill joined #gluster
13:03 nbalacha joined #gluster
13:07 T3 joined #gluster
13:09 dgandhi joined #gluster
13:14 TonyNN joined #gluster
13:16 plarsen joined #gluster
13:17 plarsen joined #gluster
13:19 atinmu joined #gluster
13:27 wkf joined #gluster
13:30 Philambdo joined #gluster
13:37 hamiller joined #gluster
13:38 glusterbot News from newglusterbugs: [Bug 1211264] Data Tiering: glusterd(management) communication issues seen on tiering setup <https://bugzilla.redhat.com/show_bug.cgi?id=1211264>
13:48 Pupeno joined #gluster
13:50 dblack joined #gluster
13:52 bene2 joined #gluster
14:02 wushudoin joined #gluster
14:02 hamiller joined #gluster
14:05 squizzi joined #gluster
14:07 lpabon joined #gluster
14:08 georgeh-LT2 joined #gluster
14:12 hamiller joined #gluster
14:13 lkoranda joined #gluster
14:14 corretico joined #gluster
14:25 bennyturns joined #gluster
14:36 lkoranda joined #gluster
14:43 lkoranda joined #gluster
14:46 lkoranda joined #gluster
14:49 lkoranda joined #gluster
14:50 DV__ joined #gluster
14:53 RameshN joined #gluster
14:54 gem joined #gluster
14:56 harish joined #gluster
14:57 georgeh-LT2 joined #gluster
15:00 deepakcs joined #gluster
15:07 bene2 joined #gluster
15:07 jermudgeon joined #gluster
15:08 gem joined #gluster
15:10 chirino joined #gluster
15:12 gem joined #gluster
15:17 lkoranda joined #gluster
15:19 gem joined #gluster
15:28 xiu Hi guys, I have a volume running on a 3.3.1 cluster (yeah 3.3.1, I know I have to upgrade). When I run an ls in a directory containing 41 files it loops and takes more and more memory. If I strace the ls process I get this: http://paste.geeknode.org/?4d8ed75feed88478#sNWfQTjSdhNpybhG95MlwgyfrOAZybtDfhIDhe4ZMgg= over and over again
15:29 xiu and the ls never returns
15:29 xiu if anyone has an idea on this :)
15:32 coredump joined #gluster
15:53 jobewan joined #gluster
15:54 nangthang joined #gluster
15:56 nangthang joined #gluster
16:00 atinmu joined #gluster
16:01 ghenry joined #gluster
16:01 jermudgeon joined #gluster
16:03 DV__ joined #gluster
16:09 kanagaraj joined #gluster
16:27 JoeJulian xiu: old kernel on that box too?
16:31 Slashman joined #gluster
16:32 papamoose joined #gluster
16:34 kdhananjay joined #gluster
16:34 dusmant joined #gluster
16:36 xiu 3.2 yeah
16:51 hagarth joined #gluster
16:56 JoeJulian xiu: If you can't upgrade things to avoid bugs, you'll need to use xfs instead of ext3/4
16:59 alpha01_ joined #gluster
17:02 haomai___ joined #gluster
17:03 bene2 joined #gluster
17:06 xiu ok, so this is related to ext4?
17:07 atinmu joined #gluster
17:09 JoeJulian xiu: yes, a bug in ext4 combined with a misuse of a structure in gluster. Both have been fixed in later versions of Gluster and the kernel.
17:09 xiu ok then I have my solution :)
17:10 xiu do you have the bug id?
17:10 JoeJulian @lucky glusterfs ext4 bug
17:10 glusterbot JoeJulian: https://lwn.net/Articles/544298/
17:10 JoeJulian huh.. that's not what I was expecting.
17:11 JoeJulian @lucky glusterfs ext4 bug joejulian.name
17:11 glusterbot JoeJulian: https://joejulian.name/blog/glusterfs-bit-by-ext4-structure-change/
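[ed: a quick way to check whether a brick is exposed to that ext4 d_off bug is to look at the backing filesystem; the brick path below is a hypothetical placeholder.]

```shell
# Show the filesystem type backing a brick; ext4 on an affected
# kernel can trigger the readdir looping described in the blog post.
df -T /data/brick1
stat -f -c %T /data/brick1
```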
17:11 Rapture joined #gluster
17:13 alpha01_ joined #gluster
17:15 xiu great thanks!
17:21 deepakcs_ joined #gluster
17:27 Sultanovich joined #gluster
17:38 vimal joined #gluster
17:39 theron_ joined #gluster
17:47 RameshN_ joined #gluster
17:54 dgandhi I have to set up geo-replication, but would like to pre-seed as much data as possible via sneakernet. Does the master or slave in geo-rep keep any state data about the other, or is it more like a simple rsync where I can just dump data into a new volume and geo-rep from there?
18:08 Rapture joined #gluster
18:15 madphoenix joined #gluster
18:15 edwardm61 joined #gluster
18:15 madphoenix Does glusterfs 3.5.x allow you to gracefully remove-brick (using "start") on more than one brick at a time?
18:16 madphoenix in a simple distributed volume
18:18 oxae joined #gluster
18:18 atinmu madphoenix, yes you could remove multiple bricks
18:19 madphoenix fantastic, thanks
18:19 atinmu madphoenix, pleasure
18:19 madphoenix hm, gluster reports i can't actually
18:19 madphoenix it says an earlier remove-brick task exists, either commit it or stop it before starting a new task
18:32 JoeJulian madphoenix: You can do more than one brick in the same remove-brick operation, but you cannot do multiple remove-brick operations.
18:32 madphoenix Ah, makes sense.  Thanks!
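[ed: in other words, several bricks can be listed in one remove-brick operation, which avoids the "an earlier remove-brick task exists" error; volume and brick names below are hypothetical.]

```shell
# Drain two bricks of a distributed volume in a single operation.
gluster volume remove-brick gv0 server1:/bricks/b1 server2:/bricks/b2 start
# Poll until migration shows "completed" for both bricks...
gluster volume remove-brick gv0 server1:/bricks/b1 server2:/bricks/b2 status
# ...then finalize, removing the bricks from the volume.
gluster volume remove-brick gv0 server1:/bricks/b1 server2:/bricks/b2 commit
```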
18:35 Sultanovich left #gluster
18:36 theron joined #gluster
18:46 lalatenduM joined #gluster
18:48 lexi2 joined #gluster
18:51 Alpinist joined #gluster
18:53 shaunm joined #gluster
18:55 karnan joined #gluster
19:00 chirino joined #gluster
19:06 fattaneh1 joined #gluster
19:26 rotbeard joined #gluster
19:30 fattaneh1 left #gluster
19:32 lexi2 joined #gluster
19:41 deniszh joined #gluster
19:42 bene2 joined #gluster
19:49 chirino joined #gluster
20:01 theron_ joined #gluster
20:02 theron_ joined #gluster
20:04 Rapture joined #gluster
20:26 theron joined #gluster
20:28 Pupeno joined #gluster
20:30 DV joined #gluster
20:42 diegows joined #gluster
21:04 Pupeno joined #gluster
21:36 lexi2 joined #gluster
21:36 suliba_ joined #gluster
21:38 cornusammonis joined #gluster
21:54 jbrooks joined #gluster
22:00 fubada purpleidea: i pulled in latest master of purpleidea/puppet-gluster but now Im getting Invalid relationship type errors: https://gist.github.com/anonymous/9350fa8916df731bf0b0
22:00 fubada on the gluster servers
22:01 fubada this wasnt happening prior to me syncing up with latest master
22:01 fubada have you seen this?
22:03 fubada https://github.com/purpleidea/puppet-gluster/issues/43
22:07 badone_ joined #gluster
22:07 purpleidea fubada: replied there
22:08 fubada thanks
22:08 fubada i did purge the resources
22:08 fubada purpleidea: i ran node deactivate
22:09 purpleidea fubada: still?
22:09 fubada yah
22:09 fubada also did rm -rf /var/lib/puppet/tmp/gluster/brick/*
22:09 purpleidea fubada: try the previous version and see what's up. look at the facts...
22:09 purpleidea fubada: i gotta run right now, but debug it a bit, and let me know!
22:09 fubada thanks
22:10 purpleidea fubada: fwiw this patch changed a few things, but as long as you have a clean cluster (no old state) it should work: https://github.com/purpleidea/puppet-gluster/commit/3dec43724737996dd8ee12cd5f567f3234510b37
22:10 fubada so i have to remove the bricks from my gluster boxes?
22:13 purpleidea fubada: no no, this is a puppet issue
22:14 fubada okay i purged all under /var/lib/puppet/tmp/gluster
22:14 fubada and deactivated all nodes
22:14 fubada unfortunately same issue on all 4
22:14 purpleidea fubada: i really gotta go, please investigate in a dev environment such as vagrant to see if you still have the issue with git master. if not, then you know it's something in your cluster.
22:14 fubada ok
22:14 purpleidea /afk
22:16 mike2512 joined #gluster
22:17 mike2512 hey guys... i am trying to install gluster on 2 centos vms  - centos 6.6
22:17 mike2512 Requires: libgfapi.so.0(GFAPI_3.4.0)(64bit)
22:17 mike2512 i have followed the procedures here:  http://www.gluster.org/community/documentation/index.php/Getting_started_install
22:19 JoeJulian Er... That probably could have been done a lot better. Let me edit that page.
22:21 mike2512 JoeJulian: yeah... the page is not updated.. the commands for centos are for an older version... but still.. with the rpms that i get... i can't install. .... this is a bad thing.. especially that now i want to see how good the product is... and from the start i get an error :P
22:22 cornus_ammonis joined #gluster
22:22 mike2512 is centos 6.6 supported? or i should move to 7 ?
22:22 mike2512 i have installed also the development tools
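[ed: one common cause of that libgfapi dependency error (an educated guess, not confirmed in this log) is mixing gluster packages from different repos/versions; checking that every gluster package resolves from the same repo usually clears it.]

```shell
# See which repo each gluster package resolves from, then install
# client and server packages from the same repo at the same version.
yum list glusterfs glusterfs-api glusterfs-fuse glusterfs-server
yum install glusterfs glusterfs-fuse glusterfs-server
```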
22:23 JoeJulian There you go mike2512
22:23 JoeJulian I've fixed it.
22:24 mike2512 already?!
22:24 mike2512 wow
22:24 JoeJulian It wasn't hard. Just had to mostly delete a bunch of crap.
22:25 mike2512 JoeJulian: you are an effing STAR
22:26 Gill joined #gluster
22:26 soumya joined #gluster
22:27 mike2512 thanks JoeJulian
22:31 JoeJulian You're welcome
22:35 T3 joined #gluster
22:35 dgandhi joined #gluster
22:41 Pupeno joined #gluster
22:41 Pupeno joined #gluster
22:42 JoeJulian ... and then I see that the static docs on gluster.org have the same garbage. ... why do we have to have the same content duplicated in a static page that nobody will ever edit? There's a reason wikis exist.
22:44 mike2512 well... what is the current documentation?
22:44 JoeJulian Since I just changed it, the wiki.
22:44 JoeJulian Now, I guess, I'm expected to change it again through a git commit.
22:45 JoeJulian ... not going to happen. I've got things to do.
22:46 mike2512 :)
22:50 Pupeno_ joined #gluster
22:51 theron_ joined #gluster
23:00 Pupeno joined #gluster
23:00 Pupeno joined #gluster
23:05 gildub joined #gluster
23:26 fubada purpleidea: found the issue, replied on github
23:27 jbrooks joined #gluster
23:30 badone__ joined #gluster
23:32 Pupeno_ joined #gluster
