IRC log for #gluster, 2015-02-09

All times shown according to UTC.

Time Nick Message
00:00 wkf joined #gluster
00:14 wkf joined #gluster
00:24 jbrooks joined #gluster
00:29 RicardoSSP joined #gluster
00:29 RicardoSSP joined #gluster
00:36 jaank joined #gluster
01:24 ralala joined #gluster
01:26 edwardm61 joined #gluster
01:32 ralala joined #gluster
01:40 ralala joined #gluster
01:42 bene2 joined #gluster
02:29 bharata-rao joined #gluster
02:45 jaank joined #gluster
02:51 gem joined #gluster
02:51 harish joined #gluster
03:09 kaushal_ joined #gluster
03:11 wkf joined #gluster
03:14 atalur joined #gluster
03:16 T3 joined #gluster
03:21 elico joined #gluster
03:30 nangthang joined #gluster
03:36 suman_d joined #gluster
03:36 MugginsM joined #gluster
03:38 jmarley joined #gluster
03:49 itisravi joined #gluster
03:59 rjoseph joined #gluster
04:02 atinmu joined #gluster
04:07 shubhendu joined #gluster
04:09 aravindavk joined #gluster
04:28 kdhananjay joined #gluster
04:30 karnan joined #gluster
04:31 jiffin joined #gluster
04:31 ppai joined #gluster
04:35 glusterbot News from newglusterbugs: [Bug 1190551] ipv6 enabled on the peer, but dns resolution fails with ipv6 and gluster does not fall back to ipv4 <https://bugzilla.redhat.com/show_bug.cgi?id=1190551>
04:36 soumya_ joined #gluster
04:36 aravindavk joined #gluster
04:37 spandit joined #gluster
04:39 ndarshan joined #gluster
04:43 gem joined #gluster
04:44 bene2 joined #gluster
05:00 bala joined #gluster
05:03 SOLDIERz_ joined #gluster
05:09 rafi joined #gluster
05:09 RameshN joined #gluster
05:10 suman_d_ joined #gluster
05:10 anoopcs joined #gluster
05:10 kshlm joined #gluster
05:15 prasanth_ joined #gluster
05:21 harish joined #gluster
05:24 dusmantkp_ joined #gluster
05:30 sakshi joined #gluster
05:31 anoopcs joined #gluster
05:32 kanagaraj joined #gluster
05:34 soumya_ joined #gluster
05:36 Manikandan joined #gluster
05:37 dusmant joined #gluster
05:38 shubhendu joined #gluster
05:39 anil joined #gluster
05:51 maveric_amitc_ joined #gluster
05:52 ramteid joined #gluster
05:59 overclk joined #gluster
05:59 Manikandan joined #gluster
06:03 shylesh__ joined #gluster
06:19 raghu` joined #gluster
06:29 Philambdo joined #gluster
06:30 ppai joined #gluster
06:43 social joined #gluster
06:44 maveric_amitc_ joined #gluster
06:51 plarsen joined #gluster
06:51 atinmu joined #gluster
07:02 mbukatov joined #gluster
07:05 ndarshan joined #gluster
07:11 shubhendu joined #gluster
07:17 atinmu joined #gluster
07:22 jtux joined #gluster
07:42 bala joined #gluster
07:42 rodrigoc joined #gluster
07:44 kanagaraj joined #gluster
07:48 LebedevRI joined #gluster
07:49 [Enrico] joined #gluster
07:52 lalatenduM joined #gluster
07:52 ppai joined #gluster
08:01 mbukatov joined #gluster
08:03 ndarshan joined #gluster
08:03 shubhendu joined #gluster
08:06 kanagaraj joined #gluster
08:17 gothos joined #gluster
08:24 kovshenin joined #gluster
08:25 soumya_ joined #gluster
08:32 social joined #gluster
08:35 suman_d joined #gluster
08:36 glusterbot News from newglusterbugs: [Bug 1190581] Detect half executed operations on disperse volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1190581>
08:37 tanuck joined #gluster
08:37 fsimonce joined #gluster
08:38 hybrid512 joined #gluster
08:39 anigeo joined #gluster
08:45 nishanth joined #gluster
08:53 Telsin joined #gluster
08:58 coredump joined #gluster
08:58 social joined #gluster
09:08 atalur joined #gluster
09:15 suman_d_ joined #gluster
09:17 sdebnath__ joined #gluster
09:22 anoopcs joined #gluster
09:31 overclk joined #gluster
09:49 ndarshan joined #gluster
09:49 shubhendu joined #gluster
09:51 drscream joined #gluster
09:51 ppai joined #gluster
09:52 drscream Hello, is it somehow possible to extend a glusterfs without rebalance the data - basically I would like to extend a gluster to have more space available and the two bricks i'm adding could be replaced by them own? Background i've a glusterfs volume ~200TB and need to extend it, so rebalance will take a long (unknown) time.
09:59 overclk joined #gluster
10:07 ndevos drscream: yes, you can - and there are other users that extend volumes but do not rebalance for the same reason
10:07 drscream could you explain me how :)
10:07 lalatenduM joined #gluster
10:07 ndevos drscream: you install your new storage servers, format the disks and mount them as your bricks
10:07 ndevos drscream: then, you do a 'gluster peer probe $new_server' and 'gluster volume add-brick $your_volume $new_server:/path/to/brick/data'
10:07 ndevos and I think thats all
10:07 ndevos at least, for the current releases, maybe older releases require some fix-layout command
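A sketch of the add-brick flow ndevos outlines above, for readers skimming the exchange. The server name, volume name and brick paths are placeholders rather than anything from drscream's setup, and the final fix-layout step is the optional one he mentions for older releases.

    # on the new storage server: prepare the brick filesystem
    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /bricks/brick1 && mount /dev/sdb1 /bricks/brick1

    # from an existing node: add the server and the brick to the volume
    gluster peer probe newserver
    gluster volume add-brick myvol newserver:/bricks/brick1

    # optional on older releases: spread the directory layout onto the new
    # brick without migrating any existing data
    gluster volume rebalance myvol fix-layout start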
10:08 drscream mhh i did that but the additional space isn't available - i'm running version 3.2.5
10:08 mbukatov joined #gluster
10:09 ndevos oh, wow, I have no idea how to do that on 3.2.5, you should really consider to upgrade
10:09 liquidat joined #gluster
10:09 ndevos 3.4 is the oldest version that still gets bugfixes...
10:10 drscream the question is a upgrade possible without any problems :D
10:10 ricky-ticky joined #gluster
10:15 lalatenduM joined #gluster
10:23 tanuck joined #gluster
10:31 ppai joined #gluster
10:32 anoopcs joined #gluster
10:33 anoopcs joined #gluster
10:33 drscream ndevos: i will try an upgrade - do you know if your way is also possible if i've replication: https://gist.github.com/drscream/fc7e31afc364a7d97c13
10:35 ndevos drscream: yes, you will need to add multiple bricks (no. of replicas) at the same time, see https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html-single/Administration_Guide/index.html#Expanding_Volumes
10:35 drscream thanks
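Since drscream's volume is replicated, the only difference from the sketch above is that bricks must be added in multiples of the replica count, as ndevos and the linked guide say. A sketch assuming replica 2, again with placeholder names:

    # replica 2 assumed: bricks are added in pairs, and each consecutive pair
    # on the command line becomes one replica set
    gluster volume add-brick myvol newserver1:/bricks/brick1 newserver2:/bricks/brick1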
10:36 glusterbot News from newglusterbugs: [Bug 1167012] self-heal-algorithm with option "full" doesn't heal sparse files correctly <https://bugzilla.redhat.com/show_bug.cgi?id=1167012>
10:50 overclk joined #gluster
10:51 shubhendu joined #gluster
10:52 ndarshan joined #gluster
10:56 suliba joined #gluster
11:01 suliba joined #gluster
11:02 itpings joined #gluster
11:02 itpings hi all
11:02 yosafbridge joined #gluster
11:02 itpings i need some help
11:02 itpings regarding gluster
11:02 itpings anyone care to solve ?
11:03 LordFolken whats the issue
11:03 LordFolken I'm not a gluster guru but I'll have a crack
11:03 itpings thanks for asking
11:04 itpings i am trying to install gluster on centos 7
11:04 itpings i was able to do it twice successfully
11:04 itpings but after restart all goes wild
11:05 tanuck joined #gluster
11:05 itpings i receive strange error out of which the most common one is mount failed
11:05 itpings firewall is stopped
11:05 itpings services all running fine
11:05 LordFolken what version?
11:05 itpings latest
11:05 itpings let me make sure
11:06 itpings 3.6.2
11:06 LordFolken what errors are in the log
11:06 LordFolken and how is your volume setup
11:06 itpings replica
11:07 itpings when ever i try to mount it gives error
11:07 itpings if you want i can paste the log her
11:07 tanuck joined #gluster
11:07 LordFolken sure
11:07 LordFolken use pastebin
11:07 LordFolken or similar if you can
11:08 itpings its fine its not a production server yet :)
11:08 itpings [2015-02-09 11:05:04.827493] E [glusterfsd-mgmt.c:1494:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
11:08 itpings [2015-02-09 11:05:04.827538] E [glusterfsd-mgmt.c:1596:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:/glkdata/brick/gv0/)
11:08 itpings [2015-02-09 11:05:04.827689] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (0), shutting down
11:08 glusterbot itpings: ('s karma is now -57
11:08 itpings [2015-02-09 11:05:04.827705] I [fuse-bridge.c:5599:fini] 0-fuse: Unmounting '/mnt/gluster/'.
11:08 itpings [2015-02-09 11:05:04.827896] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (15), shutting down
11:08 glusterbot itpings: ('s karma is now -58
11:08 itpings /var/log/glusterfs/mnt-gluster-.log (END)
11:08 itpings oo
11:09 itpings apologies
11:09 itpings i should have fpaste
11:10 LordFolken are you mounting the volume name and not the brick name
11:10 itpings hers what i am mounting
11:11 itpings mount.glusterfs backup1:/glkdata/brick/gv0/ /mnt/gluster/
11:11 itpings and hosts file has the entry so brick name is fine
11:12 itpings at the moment i am on the localhost
11:18 itpings so any idea ?
11:18 LordFolken what is the name of your volume
11:18 itpings gv0
11:18 LordFolken mount -t glusterfs brick1:/qv0 /mnt
11:20 itpings thanks
11:20 itpings it works
11:20 itpings thanks a lot
11:20 ppai joined #gluster
11:20 itpings what was wrong with my mount ?
11:21 T0aD joined #gluster
11:25 swebb joined #gluster
11:25 itpings ..
11:25 ccha joined #gluster
11:26 itpings could you please advise what was wrong with my mount ?
11:26 itpings also do i need to give brick1:/gv0 in fstab to make it permanent on boot
11:29 kdhananjay joined #gluster
11:30 elico joined #gluster
11:30 LordFolken you need to mount brick:volume
11:30 LordFolken sorry brick:/volume
11:30 LordFolken without the /glkdata/brick/
11:31 itpings yeah i did tht
11:31 itpings but could you advise what was the wrong with my mount
11:31 itpings when i was giving it the full path
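The point LordFolken is getting at, spelled out with the names that appear earlier in the log (host backup1, volume gv0, brick directory /glkdata/brick/gv0): the native client mounts a volume by its name, not by the brick's on-disk path.

    # what was failing (11:11): the brick path is not a volume name the server knows
    mount.glusterfs backup1:/glkdata/brick/gv0/ /mnt/gluster/
    # what works (11:18): the mount source is <server>:/<volume-name>
    mount -t glusterfs backup1:/gv0 /mnt/gluster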
11:32 gem joined #gluster
11:32 kshlm joined #gluster
11:37 glusterbot News from newglusterbugs: [Bug 1185950] adding replication to a distributed volume makes the volume unavailable <https://bugzilla.redhat.com/show_bug.cgi?id=1185950>
11:43 itpings so after restart again mount failed
11:43 itpings thats what the problem is
11:44 itpings ok working
11:44 itpings problem was with glusterd daemon
11:44 itpings it fails to startup even i enabled it with systemctl
11:47 hybrid512 joined #gluster
11:49 kdhananjay joined #gluster
11:49 itpings Thanks a lot LordFolken
11:49 itpings appreciated
11:52 itpings gluster is not starting up its gving error Failed to start GlusterFs
11:53 itpings i used both systemctl enable glusterfsd
11:53 itpings i used both systemctl enable glusterd
11:53 itpings also auto mount fails
11:58 hagr itpings: if you're using /etc/fstab - have a look inside the file and see if the path is correct
11:58 itpings hi hagr
11:58 itpings everything is good
11:59 itpings brick:/vol /mnt/gluster glustefs defaults 0 0
12:00 hagr maybe your mount comes before the brick or volume is ready.. sorry, I' new to gluster so that's as far as I can help.
12:01 itpings its ok hagr
12:01 itpings you are trying to help
12:01 itpings thanks for that :)
12:01 hagr oh, check spelling in your previous line - glustefs != glusterfs
12:01 itpings sure
12:01 itpings just a min
12:02 masterzen joined #gluster
12:02 itpings no alls good
12:03 itpings one funny thing is that when i write exportfs -ar nothing shows up
12:04 hybrid512 joined #gluster
12:05 hagr is this on the "brick" server? what does showmount -e give you there?
12:06 itpings nothing
12:06 itpings only Backup1.local:
12:07 T3 joined #gluster
12:07 hybrid512 joined #gluster
12:08 mbukatov joined #gluster
12:08 hagr itpings: it should give you the volume if you're running it. do you have an existing nfs server running?
12:09 itpings yes i do
12:09 itpings now its failing to mount
12:09 itpings i think i will check with centos 6.6
12:10 itpings and will come back agin
12:10 itpings thanks a lot hagr and LordFolken
12:10 itpings appreciated
12:10 hybrid512 joined #gluster
12:10 hagr itpings: I'm not sure if glusterfs can work (easily) with an existing nfs server. try disabling nfs and see if it works then
12:10 itpings ok hagr
12:10 itpings will do it
12:10 itpings thanks a lot
12:12 itpings disabled and now reboot
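For context on hagr's suggestion: Gluster 3.6 ships its own NFS server, and it cannot register with rpcbind while a kernel NFS server already holds the NFS ports. A sketch of the two usual ways out, assuming CentOS 7 and the gv0 volume from earlier:

    # either stop the kernel NFS server so Gluster's built-in NFS can start up...
    systemctl stop nfs-server && systemctl disable nfs-server
    # ...or keep kernel NFS and switch off Gluster's NFS for the volume
    gluster volume set gv0 nfs.disable on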
12:13 itisravi_ joined #gluster
12:14 pdrakewe_ joined #gluster
12:15 itpings ty all
12:15 itpings still unsolved
12:15 itpings will be back
12:21 shubhendu joined #gluster
12:24 calisto joined #gluster
12:24 deniszh joined #gluster
12:30 t0ma left #gluster
12:33 ganja\ joined #gluster
12:36 ganja\ Hello all. I am having a small problem. I am running a distributed and replicated gluster over 6 bricks, 2 servers. One of the bricks has failed, and I can not figure out how to replace it. I can not remove the old brick, with the error incorrect brick count, which is understandable. But I have not managed to run replace brick or anything else smart either.
12:36 ganja\ Anyone who could give me a pointer maybe?
12:37 ganja\ My other bricks has room enough for me to remove the whole brick-pair, but I can not find any way to do that either.
12:39 ppai joined #gluster
12:41 ildefonso joined #gluster
12:47 Slashman joined #gluster
12:50 ganja\ nevermind, it self-healed. I had an other problem. My 2nd server had the wrong IP as peer for some reason. Most likely my own fault when setting it up.
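For anyone landing here with the same symptom: a mis-recorded peer address like the one ganja\ found usually shows up quickly in the status commands. A sketch, with the volume name as a placeholder:

    gluster peer status               # hostname/IP and connection state of each peer
    gluster volume status myvol       # is every brick process online and reachable?
    gluster volume heal myvol info    # entries still waiting to be healed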
12:51 ira joined #gluster
12:55 mbukatov joined #gluster
12:56 chirino joined #gluster
12:58 ppai joined #gluster
13:01 anoopcs joined #gluster
13:02 dgandhi joined #gluster
13:02 deniszh1 joined #gluster
13:03 Slashman joined #gluster
13:10 deniszh joined #gluster
13:10 bene2 joined #gluster
13:11 cyberbootje joined #gluster
13:11 hagarth joined #gluster
13:16 [o__o] joined #gluster
13:22 bala joined #gluster
13:24 R0ok_ joined #gluster
13:33 elico joined #gluster
13:40 wkf joined #gluster
13:46 suman_d_ joined #gluster
13:49 lpabon joined #gluster
14:02 mbukatov joined #gluster
14:10 shaunm joined #gluster
14:21 cyberbootje joined #gluster
14:24 ricky-ticky2 joined #gluster
14:27 coredump joined #gluster
14:28 plarsen joined #gluster
14:29 calisto joined #gluster
14:29 virusuy joined #gluster
14:32 georgeh-LT2 joined #gluster
14:32 bennyturns joined #gluster
14:39 edwardm61 joined #gluster
14:40 awerner joined #gluster
14:43 theron joined #gluster
14:44 dbruhn joined #gluster
14:45 lalatenduM joined #gluster
14:46 jmarley joined #gluster
14:49 Gill joined #gluster
14:57 mbukatov joined #gluster
15:01 liquidat joined #gluster
15:18 malevolent joined #gluster
15:19 wushudoin joined #gluster
15:28 sdebnath__ joined #gluster
15:35 jobewan joined #gluster
15:36 B21956 joined #gluster
15:41 suman_d_ joined #gluster
15:46 MacWinner joined #gluster
15:48 sage_ joined #gluster
15:49 ira joined #gluster
15:55 wkf joined #gluster
15:56 plarsen joined #gluster
16:00 suman_d_ joined #gluster
16:01 lmickh joined #gluster
16:05 shubhendu joined #gluster
16:10 _Bryan_ joined #gluster
16:11 soumya_ joined #gluster
16:15 crushkill is there a release schedule for updates to latest?
16:15 maveric_amitc_ joined #gluster
16:19 crushkill i.e. when can i expect a patch to 3.6.2
16:19 crushkill *update
16:21 wkf joined #gluster
16:23 T3 joined #gluster
16:23 RameshN joined #gluster
16:28 crushkill joined #gluster
16:29 MacWinner joined #gluster
16:47 RameshN joined #gluster
16:53 jbrooks joined #gluster
16:54 sdebnath__ joined #gluster
16:59 vasis joined #gluster
17:01 vasis Hello! I have corosync + pacemaker working on two virtual machines and in an attempt to make sure my apache root directory is identical on both servers, I am trying to set up glusterfs...
17:02 vasis I have added the peers and created/started the volume on both virtual servers
17:03 suman_d_ joined #gluster
17:03 vasis I also used mount.glusterfs to mount the volume on both ends as follows:
17:04 vasis from server1: mount.glusterfs server2:/TEST /mnt/test
17:04 vasis and from server2: mount.glusterfs server1:/TEST /mnt/test
17:05 JoeJulian ~pasteinfo | vasis
17:05 glusterbot vasis: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
17:05 vasis The problem is every time I try to access the /mnt/test folder my session ends up broken on both ends. Yes, I will do:
17:07 vasis http://pastebin.com/5hVj09rh
17:07 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
17:07 vasis this is from server1
17:08 JoeJulian Ok, good. You had me worried when you said you created and started the volume on both servers.
17:09 JoeJulian What does "my session ends up broken on both ends" mean?
17:09 vasis http://fpaste.org/183389/
17:09 vasis it's the same on server2
17:10 vasis @JoeJulian: It means I have to kill my tmux session in order to get back to the console...
17:11 JoeJulian Ah, ok.
17:11 vasis Am I getting the mount.glusterfs command backwards?
17:11 JoeJulian Have you looked at the client log for clues? In your case /var/log/glusterfs/mnt-test.log
17:11 vasis But the thing is I tried the other way as well (as in from server1: mount.glusterfs server1:/TEST /mnt/test)
17:12 vasis hmmm I was tailing the log under /var/log/glusterfs/bricks while doing it...
17:12 JoeJulian My guess is network.
17:12 JoeJulian I doubt the problem is getting as far as the bricks.
17:13 vasis Well I did tcpdump against the iface to see what's going on there... but network seems to be working...
17:14 JoeJulian No firewalls?
17:15 vasis I also had a look in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log. The only thing there which is kind of wrong is this: "reading from socket failed. Error (Transport endpoint is not connected), peer (127.0.0.1:1018)"
17:15 vasis No firewall in the way no... This is a test environment on a VM
17:15 vasis And iptables is disabled for this test
17:16 JoeJulian /var/log/glusterfs/mnt-test.log is probably still the most useful log for this
17:16 vasis I must be doing something stupid... I am sure. I just don't know what yet... :(
17:16 JoeJulian selinux?
17:16 gem joined #gluster
17:17 JoeJulian though that shouldn't cause a hang...
17:18 vasis @JoeJulian: While I was adding the peers and creating/starting the volume I was tailing /var/log/glusterfs/etc-glusterfs-glusterd.vol.log... Nothing there other than the WARN that happens every now and again...
17:19 vasis Maybe if I fix those the problem will go away even though that will be quite random if it happens...
17:19 John_HPC joined #gluster
17:19 JoeJulian Right. That's the management daemon's log.
17:19 vasis Is this "W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (127.0.0.1:1018)" serious?
17:19 glusterbot vasis: That's just a spurious message which can be safely ignored.
17:19 rcampbel3 joined #gluster
17:20 vasis ok, I thought as much, thanks!
17:20 JoeJulian The only time that's going to matter is during the initial mount when the client retrieves the volume definition from the management daemon. After that it connects directly to the brick daemons.
17:20 JoeJulian Since the mount succeeds, that's not it.
17:20 vasis hmmm OK.
17:21 JoeJulian I'm going to ask you one last time about the client log then I've got to get back to work.
17:22 vasis I am not able to do anything on the /mnt/VASILIS.. ls breaks the session, trying to touch a file has the same result... I am really confused here. It looks like the OS does not like it when I try to do anything with it.
17:22 vasis what's the client log?
17:23 JoeJulian When you said you mounted the volume on /mnt/test, it was as I specified twice now, /var/log/glusterfs/mnt-test.log. Now that you're saying it's /mnt/VASILIS, it'll be /var/log/glusterfs/mnt-VASILIS.log
17:24 vasis sorry, I tried to make this easier but I caused confusion... yes.. Whatever I pasted as the output in the fpaste is correct, I meant /mnt/VASILIS
17:25 vasis that's the actual mount point
17:25 JoeJulian Oh, snap!
17:25 JoeJulian You're mounting the client over the top of the brick. That's a big no-no.
17:25 vasis I thought I am doing something stupid!!!!
17:26 JoeJulian Bricks are for the use of glusterfs only.
17:26 JoeJulian Mount your client elsewhere and only access your volume through the client mount.
17:26 vasis So I will need a different partition if I got it right, yes?
17:27 JoeJulian Not necessarily.
17:27 JoeJulian A "brick" is a path assigned to glusterfs.
17:27 mbukatov joined #gluster
17:27 vasis ohhhh I see...
17:27 JoeJulian Typically, it's a mounted filesystem dedicated to storage, but not a requirement.
17:27 vasis so making a different folder under /mnt will do it I suppose
17:28 JoeJulian You've already built your volume, just mount the client somewhere else, /mnt/VASILIS_client
17:28 JoeJulian for instance
17:29 vasis yes, I got it now. Thanks for all your help, I'll try that but I am almost sure that's what it is.
17:29 JoeJulian You're welcome.
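JoeJulian's fix as a sketch, using the names from this conversation (volume TEST, client previously mounted over the brick at /mnt/VASILIS): the brick path is left to glusterfsd, and applications only ever touch a separate client mount.

    mkdir -p /mnt/VASILIS_client
    mount -t glusterfs server1:/TEST /mnt/VASILIS_client
    # the Apache document root then lives under /mnt/VASILIS_client;
    # nothing writes directly to the brick path itself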
17:29 John_HPC JoeJulian: Any other thoughts on duplicate directories/files? I tried a full heal and a fix layout. http://paste.ubuntu.com/10145165/ It appears a "linkto" file exists on 3/4 the actual file resides on 5/6. Should 1/2 have that same linto file? If so, looks like 1/2 aren't properly synced with the other servers.
17:30 JoeJulian @lucky dht misses are expensive
17:30 glusterbot JoeJulian: http://joejulian.name/blog/dht-misses-are-expensive/
17:30 JoeJulian John_HPC: ^ Go read the first half of that page.
17:30 JoeJulian It'll teach you all about DHT and those link files.
17:32 John_HPC thanks
17:35 plarsen joined #gluster
17:40 John_HPC btw, nice monty python reference.
17:40 JoeJulian Thanks. :D
17:41 shubhendu joined #gluster
17:42 JoeJulian So... The dht.linkto file only exists if the hash for the filename points to the brick assigned that hash region. That linkto file's extended attributes will point to where the file *actually* is.
17:43 kbyrne joined #gluster
17:47 JoeJulian The link files should be mode 1000, size 0, and have an extended attribute trusted.glusterfs.dht.linkto
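Those properties can be checked directly on a brick. A sketch, with the file path as a placeholder:

    # run on the brick suspected of holding only the link file
    ls -l /bricks/brick3/path/to/file        # expect size 0 and mode ---------T (1000)
    getfattr -n trusted.glusterfs.dht.linkto -e text /bricks/brick3/path/to/file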
17:48 sdebnath__ joined #gluster
17:49 B21956 joined #gluster
17:51 maveric_amitc_ joined #gluster
17:54 dbruhn ugh, rc.local doesn't wait to run after boot anymore... there goes what it took to get the brick servers to mount the file system to themselves.
17:54 dbruhn on centos 7
17:55 John_HPC JoeJulian: yep. It *should* be on brick3 on gluster3, but it actually resides on brick4 on gluster5. So that's working correctly.
17:56 crushkill left #gluster
18:04 dbruhn Anyone have any ideas on how to get auto mounting a volume to work in cents 7?
18:04 JoeJulian I just use _netdev and have never had a problem.
18:05 dbruhn _netdev doesn't seem to affect gluster
18:05 JoeJulian No, _netdev affects mount
18:05 JoeJulian Well, mountall
18:06 dbruhn yep I understand, I am saying _netdev has a static list of supported filesystems, and gluster isn't one of them.
18:06 JoeJulian The process mounts all disks without _netdev and without known network dependent disks. Then after the network starts, it comes back for _netdev and nfs,cifs,etc.
18:06 JoeJulian No, it's in *addition* to the static list.
18:07 dbruhn localhost:/gv02 /mnt/gv02 glusterfs defaults,_netdev 0 0
18:07 dbruhn this is my fstab entry
18:07 dbruhn and it still doesn't work
18:07 dbruhn but if I do a mount -all after boot it works fine
18:07 elico joined #gluster
18:08 dbruhn error logs indicate that it's trying to mount it before the networking is up
18:08 kovshenin joined #gluster
18:11 Rapture joined #gluster
18:13 suman_d_ joined #gluster
18:13 JoeJulian I'm not trying to be argumentative, just informative. Clearly it's not working as expected but here's why it *should*. I'm hoping that this will lead to other trains of thought: https://github.com/systemd/systemd/blob/e0ec8950935ce587935e299c22232fbf4a2664c9/src/core/mount.c#L73
18:14 tdasilva joined #gluster
18:14 dbruhn No I appreciate the feedback, improves my understanding of how it works.
18:15 JoeJulian Seems to only matter if running directly from systemd.
18:18 JoeJulian My expectation is that, for some reason, glusterd or glusterfsd(s) aren't yet listening even though the network (theoretically) is up.
18:20 B21956 left #gluster
18:23 B21956 joined #gluster
18:23 semiosis dbruhn: what kind of exotic networking are you doing?  ethernet bonding?  bridging?
18:23 JoeJulian dbruhn: Looks like kkeithley already ran in to this before https://lists.fedoraproject.org/pipermail/devel/2013-July/185809.html
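A workaround in the same spirit as the rc.local trick dbruhn used on CentOS 6 is a small boot-time script that waits for glusterd to answer before mounting. A sketch only, reusing the gv02 volume and mount point from the fstab line above:

    #!/bin/bash
    # wait (up to about a minute) until glusterd can answer queries about the volume
    for i in $(seq 1 30); do
        gluster volume status gv02 >/dev/null 2>&1 && break
        sleep 2
    done
    # then mount via the existing fstab entry if it is not already mounted
    mountpoint -q /mnt/gv02 || mount /mnt/gv02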
18:26 semiosis ndevos: any update on NSR from Friday?  re: https://botbot.me/freenode/gluster/2015-02-04/?msg=31230968&amp;page=4
18:28 lpabon joined #gluster
18:38 dbruhn hey sorry, was on the phone.
18:38 dbruhn I am running teaming
18:38 syntaxerrors Does ordering still matter when creating a distribute only volume over multiple bricks and gluster servers? The docs state that it's important 'as of the most current release of this writing,gluster3.3' which is obviously not 3.6.
18:39 syntaxerrors Not sure if it does any balancing/splaying automatically now.
18:39 JoeJulian It's still important.
18:40 dbruhn semiosis, I am running teaming active/passive.
18:41 semiosis never heard of it
18:41 dbruhn Oddly I have one pair of servers in a gluster volume and everything is working as expected, and a second pair that is not.
18:41 semiosis but it's always some exotic networking thing when people have this problem
18:41 dbruhn teaming is the new bonding via redhat, cent
18:42 dbruhn seems the new implementations of systemd doesn't make rc.local wait to run either
18:42 JoeJulian syntaxerrors: This is on the list for 4.0 http://www.gluster.org/community/documentation/index.php/Features/better-brick-mgmt
18:43 dbruhn which is how I solved this issue on cent6 with infiniband
18:43 JoeJulian Oh, infiniband! Does that not count as network?
18:43 dbruhn I am just not running it anymore.
18:44 JoeJulian Damn. Was hoping that was it. Would give me some idea where to go next.
18:44 syntaxerrors JoeJulian: thanks for the clarification. What happens if it's not ordered correctly but otherwise both gluster servers have exactly identical number of volumes? Since it's distributing them evenly to all volumes you would think it wouldn't matter? Sorry just trying to figure out the importance of it.
18:44 SOLDIERz_ joined #gluster
18:45 dbruhn The company I was working for that I was using IB and gluster for sold, this is new gig, new systems.
18:50 JoeJulian dbruhn: What if you set volfile-max-fetch-attempts={some huge number}
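The knob JoeJulian names is an option of the glusterfs client process itself. A sketch of mounting by hand with it set, reusing dbruhn's volume and mount point; whether mount.glusterfs in 3.6 exposes an equivalent fstab option is worth confirming in its man page before relying on it.

    # roughly what mount.glusterfs runs under the hood, with the retry knob added
    glusterfs --volfile-server=localhost --volfile-id=/gv02 \
              --volfile-max-fetch-attempts=100 /mnt/gv02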
18:50 JoeJulian ~brick_order | syntaxerrors
18:50 glusterbot syntaxerrors: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
18:51 JoeJulian That's really the only thing that brick order matters for (unless you're using stripe)
18:51 syntaxerrors JoeJulian: got it. Since I'm not doing replica's, only a simple distribute that should not matter from what you're saying.
18:51 JoeJulian Nope
18:52 syntaxerrors JoeJulian: cool.
18:54 lpabon joined #gluster
19:16 lpabon joined #gluster
19:20 ThatGraemeGuy joined #gluster
19:51 rcampbel3 Any gluster-specific limitations or best practices for max # of subdirs in a directory? I'm built on top of xfs, that's supposed to handle millions, but I've seen recommendations of keeping that # to the tens of thousands. Does glusterfs have any of its own performance issues with large number of files or dirs or objects in general within a single dir?
19:59 semiosis rcampbel3: if you try to list a directory with lots of entries you may not like the performance
19:59 semiosis if you dont need to enumerate directory entries then you should be fine
20:00 semiosis but afaik glusterfs doesnt have any limit
20:00 semiosis and i mean any entries, files or subdirs
21:07 ilbot3 joined #gluster
21:07 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
21:07 MacWinner joined #gluster
21:29 MugginsM joined #gluster
21:38 rcampbel3 joined #gluster
21:48 ccverak joined #gluster
21:48 ccverak hi everyone
21:48 wkf joined #gluster
21:48 ccverak anybody with experience on glusterfs
21:49 dgandhi joined #gluster
21:49 purpleidea nope, not here :P
21:49 purpleidea ~hi | ccverak
21:49 glusterbot ccverak: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
21:50 ccverak Hi purple idea
21:51 ccverak i was wondering if you can help me up with some problem I'm having with gluster
21:51 purpleidea ccverak: please don't direct message channel users unless they have previously consented
21:52 ccverak HIi you all, look, I'm looking for someone with experience on gluster that can help me up here: https://stackoverflow.com/questions/28368751/persistent-storage-for-apache-mesos
21:52 wkf joined #gluster
21:53 ccverak sorry purpleidea, I'm new on this channel...
21:56 semiosis we generally like to keep everything in channel, because it is logged, and can be helpful for others later on
21:56 semiosis we usually only use PM for private details or off topic chats
22:08 plarsen joined #gluster
22:15 syntaxerrors I was testing migration of data from a brick on my first gluster hosts to a new brick created on my second gluster server host and received an error about replace-brick being deprecated
22:15 syntaxerrors dd
22:16 syntaxerrors s it still wise to use replace-brick?
22:17 syntaxerrors gluster volume replace-brick test-volume server3:/exp3  server5:/exp5 start etc.
22:17 semiosis sometimes
22:17 semiosis oh, deprecated?
22:17 semiosis whats the new shiny?
22:17 syntaxerrors yes ;)
22:18 syntaxerrors Should I be using a newer command to migrate from one brick to another? Just asking since I see reference to the 'depricated' command in https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_managing_volumes.md#migrating-volumes
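For a distribute-only volume like the one being discussed, the commonly documented replacement for replace-brick with data migration in 3.6-era releases is to add the new brick and then drain the old one with remove-brick, which migrates the data off before the commit. A sketch using the same names as the command above:

    gluster volume add-brick test-volume server5:/exp5
    gluster volume remove-brick test-volume server3:/exp3 start
    gluster volume remove-brick test-volume server3:/exp3 status   # wait for "completed"
    gluster volume remove-brick test-volume server3:/exp3 commit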
22:27 gildub joined #gluster
22:38 ilbot3 joined #gluster
22:38 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
22:48 lalatenduM joined #gluster
22:53 plarsen joined #gluster
22:59 badone_ joined #gluster
23:01 syntaxerrors semiosis: dug this up which explains the reasoning. http://www.gluster.org/pipermail/gluster-users/2012-October/011502.html
23:03 semiosis nice
23:04 shaunm joined #gluster
23:06 siel joined #gluster
23:10 diegows joined #gluster
23:13 h4rry hi all anyone familiar with the gluster puppet module internals here? I ran into a problem that is registering a puppet "change" on my many hosts when there isn't a change
23:13 h4rry apparently some files are touched:    http://pastebin.com/aPJVjpeA  every puppet run
23:13 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
23:15 h4rry Here's the output: http://paste.ubuntu.com/10149680/  -- it's making my foreman reports report a change, looks like an error
23:15 h4rry thing is I have dozens of hosts soon to be more it looks like mass error in my puppet reports
23:18 lpabon joined #gluster
23:38 jaank joined #gluster
23:46 jmarley joined #gluster
23:47 JoeJulian That would be purpleidea... Hey James! ^^
23:50 purpleidea h4rry: hey, this is unfortunately a known issue, which happens for some rarer configurations... a patch has not been merged yet because i was a bit busy and i'm oot. if it's not fixed by the middle of next week please ping me again. the goodnews is that it's not dangerous, just slightly annoying for reports :( sorry
23:51 fubada ah! someone else :)
23:51 h4rry @purpleidea thanks
23:51 purpleidea h4rry: but if you want to take a stab at it, please do :) it's slightly non-trivial unfortunately.
23:51 purpleidea fubada: lol, i know, right!
23:51 fubada i tried and failed :P
23:52 purpleidea fubada: best way to find out if you actually have users. introduce a small, but not dangerous bug and wait for the reports ;)  <-- just kidding, that's not what happened
23:52 glusterbot purpleidea: <'s karma is now -13
23:52 purpleidea :P
23:52 purpleidea ++
23:52 purpleidea <++
23:52 glusterbot purpleidea: <'s karma is now -12
23:52 purpleidea <++++++
23:52 glusterbot purpleidea: <++++'s karma is now 1
23:52 purpleidea lol
23:52 purpleidea lol--
23:52 glusterbot purpleidea: lol's karma is now -1
23:54 ron-slc joined #gluster
