
IRC log for #gluster, 2014-12-10


All times shown according to UTC.

Time Nick Message
00:01 TehStig joined #gluster
00:15 gildub joined #gluster
00:25 _Bryan_ joined #gluster
00:56 plarsen joined #gluster
01:00 calisto joined #gluster
01:08 gstock_ joined #gluster
01:14 bala joined #gluster
01:18 Pupeno_ joined #gluster
01:20 msciciel joined #gluster
01:29 TehStig joined #gluster
01:45 calisto1 joined #gluster
02:10 TehStig joined #gluster
02:14 haomaiwa_ joined #gluster
02:39 TehStig joined #gluster
02:51 sac_ joined #gluster
02:57 hagarth joined #gluster
03:04 meghanam joined #gluster
03:05 bharata-rao joined #gluster
03:05 bharata_ joined #gluster
03:18 jaank joined #gluster
03:42 kanagaraj joined #gluster
03:42 soumya__ joined #gluster
03:43 harish joined #gluster
03:45 lpabon joined #gluster
03:45 RameshN joined #gluster
03:51 saurabh joined #gluster
03:51 TehStig joined #gluster
03:56 kumar joined #gluster
03:57 atinmu joined #gluster
04:00 khelll joined #gluster
04:02 siel joined #gluster
04:04 itisravi joined #gluster
04:05 hagarth joined #gluster
04:18 TehStig joined #gluster
04:24 AaronGr joined #gluster
04:31 kshlm joined #gluster
04:38 ndarshan joined #gluster
04:43 rafi joined #gluster
04:45 anoopcs joined #gluster
04:48 hagarth joined #gluster
04:48 sahina joined #gluster
04:55 spandit joined #gluster
04:58 Philambdo joined #gluster
04:58 anoopcs joined #gluster
04:58 dusmant joined #gluster
05:00 lalatenduM joined #gluster
05:01 ppai joined #gluster
05:08 bala joined #gluster
05:08 rjoseph joined #gluster
05:16 zerick joined #gluster
05:32 meghanam joined #gluster
05:35 poornimag joined #gluster
05:37 glusterbot News from newglusterbugs: [Bug 1172458] fio: end_fsync failed for file FS_4k_streaming_writes.1.0 unsupported operation <https://bugzilla.redhat.com/show_bug.cgi?id=1172458>
05:46 ndarshan joined #gluster
05:50 atalur joined #gluster
05:54 ramteid joined #gluster
05:54 raghu` joined #gluster
06:02 karnan joined #gluster
06:03 meghanam joined #gluster
06:03 meghanam_ joined #gluster
06:14 kdhananjay joined #gluster
06:21 Philambdo joined #gluster
06:22 karnan joined #gluster
06:23 elico joined #gluster
06:24 TehStig joined #gluster
06:25 karnan joined #gluster
06:31 dusmant joined #gluster
06:35 soumya__ joined #gluster
06:35 ndarshan joined #gluster
06:37 glusterbot News from resolvedglusterbugs: [Bug 1038247] Bundle Sources and javadocs in mvn build <https://bugzilla.redhat.com/show_bug.cgi?id=1038247>
06:38 anoopcs joined #gluster
06:40 Pablo joined #gluster
06:43 topshare joined #gluster
06:44 tagati joined #gluster
06:49 bala joined #gluster
06:50 sahina joined #gluster
06:58 SOLDIERz joined #gluster
06:58 tagati hi there! two quick questions: 1) is it compulsory for a gluster brick to be a filesystem mounted with xattr support? 2) does xfs already have built-in xattr support, i.e. it does not need to be explicitly mounted as such?
06:59 atinmu tagati, answer for both 1 & 2 are 'yes'
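[Editor's note: XFS supports extended attributes natively and gluster stores its metadata in the trusted.* namespace, so no extra mount option is needed. A minimal check, assuming a hypothetical XFS brick on /dev/sdb1 mounted at /export/brick1:]
    mkfs.xfs -i size=512 /dev/sdb1                     # 512-byte inodes leave room for gluster's xattrs (common recommendation)
    mount /dev/sdb1 /export/brick1
    setfattr -n trusted.test -v ok /export/brick1      # succeeds if trusted.* xattrs can be stored (run as root)
    getfattr -n trusted.test /export/brick1
    setfattr -x trusted.test /export/brick1            # clean up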
07:02 harish joined #gluster
07:05 sac_ joined #gluster
07:06 tagati atinmu, outstanding, thanks so much
07:07 karnan joined #gluster
07:13 anoopcs joined #gluster
07:21 atinmu tagati, pleasure
07:23 jtux joined #gluster
07:27 jiffin joined #gluster
07:28 rgustafs joined #gluster
07:45 M28 left #gluster
07:50 dusmant joined #gluster
07:50 bala joined #gluster
07:54 hagarth joined #gluster
08:00 Debloper joined #gluster
08:11 tetreis joined #gluster
08:12 pkoro joined #gluster
08:12 vimal joined #gluster
08:14 TehStig joined #gluster
08:14 deniszh joined #gluster
08:20 kovshenin joined #gluster
08:21 fsimonce joined #gluster
08:22 topshare joined #gluster
08:23 dizzystreak_ joined #gluster
08:23 topshare joined #gluster
08:24 topshare joined #gluster
08:24 sahina joined #gluster
08:26 mator what's the difference between glusterd and glusterfsd? thanks
08:26 hagarth mator: glusterd is the management daemon, glusterfsd is the data server daemon
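[Editor's note: a quick way to see the split, sketched under the assumption of a server hosting bricks of a started volume:]
    ps -C glusterd -o pid,args      # one management daemon per server
    ps -C glusterfsd -o pid,args    # one data-server process per local brick
    gluster volume status           # lists brick processes, ports and PIDs per volume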
08:33 lalatenduM joined #gluster
08:34 overclk joined #gluster
08:35 RameshN joined #gluster
08:35 johndescs hi, I followed that to expand a rep 2 volume: http://review.gluster.org/#/c/8503/2/doc/admin-guide/en-US/markdown/admin_replace_brick.md
08:35 johndescs but it seems I had to launch healing by hand at the end
08:35 SOLDIERz joined #gluster
08:35 johndescs could someone update that doc and maybe publish it? :)
08:37 anoopcs joined #gluster
08:39 Anuradha joined #gluster
08:46 mator i'm upgrading from 3.2.x to 3.3.x now
08:47 codex joined #gluster
08:47 mator hagarth, should glusterd be running on all nodes or just on one?
08:47 codex joined #gluster
08:48 atinmu mator, it should be on all the nodes
08:48 mator k, thanks
08:50 mator [root@UAK1-NAS-SRV2 glusterd]# mount /mnt/glusterfs/
08:50 mator unknown option _netdev (ignored)
08:50 mator Mount failed. Please check the log file for more details.
08:50 mator fstab ->
08:50 mator localhost:/cdn-vol1 /mnt/glusterfs glusterfs defaults,_netdev 0 0
08:51 mator can you please tell me, what logs should I expect
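[Editor's note: the "unknown option _netdev (ignored)" warning is generally harmless; the actual mount failure is logged by the fuse client under /var/log/glusterfs/, in a file named after the mount point with slashes turned into dashes (as also pointed out later in this log). For the fstab entry above that would be, roughly:]
    less /var/log/glusterfs/mnt-glusterfs.log    # client log for the /mnt/glusterfs mount (path derived, not verified here)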
08:51 LebedevRI joined #gluster
08:52 [Enrico] joined #gluster
08:53 devilspgd I'm adding bricks to a replicate volume, looks like it's replicating files as they're accessed. What will get everything replicated?
08:53 devilspgd Trigger a self-heal? Or a "find...stat" command?
08:54 ninkotech joined #gluster
08:54 ninkotech_ joined #gluster
08:55 devilspgd I'm not 100% sure a self-heal will do the trick, and it takes ages to find out
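[Editor's note: both this and the replace-brick question above usually come down to triggering a full self-heal instead of waiting for files to be healed on access. A hedged sketch, with VOLNAME as a placeholder:]
    gluster volume heal VOLNAME full     # crawl the volume and heal everything, not just known-pending entries
    gluster volume heal VOLNAME info     # check what is still pending
    # older fallback: stat every file through a client mount to trigger healing
    find /mnt/VOLNAME -noleaf -print0 | xargs -0 stat > /dev/null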
08:55 tagati joined #gluster
08:55 khelll joined #gluster
08:55 Pupeno joined #gluster
08:55 Pupeno joined #gluster
09:01 kaushal_ joined #gluster
09:03 jaank joined #gluster
09:03 cristian joined #gluster
09:05 mator where do i get glusterfsd.vol file?
09:06 mator /etc/init.d/glusterfsd refuses to start because i don't have this file
09:06 sahina joined #gluster
09:06 anil joined #gluster
09:07 dusmant joined #gluster
09:10 karnan joined #gluster
09:16 cristian Hi guys, have any of you seen very low glusterfs write performance before? I get ~140MB/s reading from it, and 3-4MB/s writing to it. I've got 3 bricks on 3 different servers, replicated: http://pastebin.com/z1gT3X2K
09:16 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
09:16 spandit joined #gluster
09:17 ppai joined #gluster
09:17 cristian Here's the paste.ubuntu link to config and tests: http://paste.ubuntu.com/9453865/
09:18 cristian I find this quite strange, since writing directly to the ext3 disk (on top of which glusterfs resides) I get ~90MB/s
09:21 cristian Write speed 24-30 times lower than writing directly to disk is really unacceptable, maybe I've done something wrong, any hinds?
09:21 cristian s/hinds/hints
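[Editor's note: with all three bricks acting as replicas, the fuse client writes every byte to all three servers, so small-block, sync-heavy benchmarks and a limited client uplink hurt writes far more than reads. A rough way to isolate the bottleneck, with hypothetical host and mount names:]
    dd if=/dev/zero of=/mnt/gv0/ddtest bs=1M count=1024 conv=fdatasync   # large sequential writes through the mount
    iperf -s                      # on one of the brick servers
    iperf -c server1              # on the client: raw network throughput toward each replica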
09:25 ivok joined #gluster
09:27 khelll joined #gluster
09:36 ghenry joined #gluster
09:46 ndevos mator: you do not need a glusterfsd.vol file with current versions, those get generated automatically with the gluster command
09:46 ndevos mator: you really should use version 3.4 or newer, older versions wont get any updates anymore
09:47 atinmu joined #gluster
09:48 mator i can't upgrade from 3.2
09:48 mator i haven't glusterfsd.vol in 3.2
09:48 mator and 3.3 requires it to start
09:49 atinmu mator, missing volfiles means you have insufficient configuration details
09:49 mator ndevos, I have another test fedora20 glusterfs cluster (2 nodes), it doesn't have glusterfsd.vol either
09:49 ndevos the glusterfsd processes should get started by glusterd - the glusterfsd service script should only stop the glusterfsd processs
09:49 mator but works
09:50 ndevos mator: right, fedora 20 uses a current version
09:50 mator i wonder is there someone with 3.3 / 3.4 / 3.5 versions of glusterfs, could someone show me how it can look like
09:51 ndevos mator: it does not exist, each glusterfsd process gets its own .vol file created automatically in those versions
09:51 spandit joined #gluster
09:52 ndevos @upgrade
09:52 glusterbot ndevos: I do not know about 'upgrade', but I do know about these similar topics: '3.3 upgrade notes', '3.4 upgrade notes'
09:52 mator installed glusterfs-3.3 version rpms,  glusterd starts, but no glusterfsd processes
09:52 ndevos mator: you only get glusterfsd processes on the server(s) that host bricks
09:52 mator http://vbellur.wordpress.com/2012/05/31/upgrading-to-glusterfs-3-3/
09:53 ndevos yes, those are the ,,(3.3 upgrade notes)
09:53 glusterbot http://vbellur.wordpress.com/2012/05/31/upgrading-to-glusterfs-3-3/
09:54 ndevos but, 3.3 also does not get any updates anymore, if you are upgrading, you really should move to a newer version
09:55 mator ndevos, init script for "glusterfsd start" requires /etc/glusterfs/glusterfsd.vol file, but it doesn't exist in 3.2 version
09:55 ndevos mator: that script is provided as legacy compatibility - you do not need to start that on 3.3
09:55 mator ndevos, so basically i wanted to check whether 3.3 works, before going further (installing 3.4 and 3.5)
09:56 mator on my server node (with bricks) glusterd starts, but there's no glusterfsd (brick daemons) started
09:57 ndevos mator: and 'gluster volume info' shows that there are volumes started that host bricks on that server?
09:57 * ndevos thinks atinmu knows more about the details for that
09:58 mator http://fpaste.org/158276/20545414/
09:59 atinmu hey mator, gluster volume info shows there are no volumes
09:59 mator how come, it was in 3.2 version of rpms installed =)
09:59 mator as well in /var/log/glusterfs/* logs
10:00 atinmu can u check 'ls -l /var/lib/glusterd/'
10:00 mator what file/files i should inspect there ?
10:01 mator http://fpaste.org/158277/14182057/
10:02 xrsa joined #gluster
10:02 atinmu mator, do u have vols folder there?
10:02 mator atinmu, http://fpaste.org/158277/14182057/
10:02 kshlm joined #gluster
10:03 TehStig joined #gluster
10:03 atinmu mator, I do see there are vol files
10:05 atinmu mator, can u re-execute gluster volume info and paste the last 50 lines of glusterd log?
10:05 atinmu mator, prolly kill glusterd and bring up as glusterd -LDEBUG
10:05 mator atinmu, i need to raise log level for glusterd first
10:05 mator yeah
10:05 mator give me a minute
10:06 atinmu mator, I am going for a meet, will be back in sometime
10:07 karnan joined #gluster
10:08 mator http://fpaste.org/158278/06096141/
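[Editor's note: a sketch of what atinmu suggests above; option names should be double-checked against glusterd --help on the installed version:]
    service glusterd stop         # or kill the running glusterd
    glusterd --debug              # run in the foreground with debug logging to the terminal
    # or keep it as a daemon with a raised log level:
    glusterd -LDEBUG              # i.e. --log-level=DEBUG; output typically lands in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log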
10:12 Dw_Sn joined #gluster
10:14 mator thanks
10:14 mator you are my only hope =)
10:15 mator but i wasn't expecting any trouble with the upgrade...
10:19 mator is it possible to create a new volume with old data bricks?
10:19 mator what do you think?
10:20 Dw_Sn okay I am gonna start testing Gluster for oVirt, in the very short quick start guide it mentions that i need a sperated DISK, this can be an LVM LUN ?
10:21 mator i mean to create a new volume while preserving all data on the bricks, would it work, do you happen to know? thanks. I'm considering it as a last-resort option
10:21 sahina joined #gluster
10:23 ndarshan joined #gluster
10:23 Dw_Sn http://www.gluster.org/community/documentation/Getting_started_setup_virt <-- 404
10:23 glusterbot Dw_Sn: <'s karma is now -5
10:24 Dw_Sn http://www.gluster.org/documentation/Getting_started_common_criteria/ <-- all links in this page is 404 !
10:24 atinmu mator, can u execute gluster v info cdn-vol1 ?
10:24 glusterbot Dw_Sn: <'s karma is now -6
10:25 mator atinmu, volume doesn't exist, let me fpaste logs
10:26 mator atinmu, http://fpaste.org/158282/41820716/
10:27 mator netstat -anlp | grep gluster
10:27 mator http://fpaste.org/158283/07257141/
10:27 Dw_Sn so any idea if I can use LVM as gluster bricks backend device ?
10:31 mator Dw_Sn, but why?
10:34 tetreis joined #gluster
10:37 devilspgd I'm adding bricks to a replicated volume, looks like it's replicating files as they're accessed. What will get everything replicated immediately? Trigger a self-heal? Or a "find...stat" type command?
10:37 _shaps_ joined #gluster
10:43 mator fixed it
10:43 mator thanks
10:43 karnan joined #gluster
10:43 atinmu joined #gluster
10:44 ndarshan joined #gluster
10:47 dusmant joined #gluster
10:48 mator atinmu, fixed it
10:48 mator trying to start servers
11:04 atinmu joined #gluster
11:10 mator atinmu, fixed it
11:18 ndarshan joined #gluster
11:21 kshlm joined #gluster
11:22 necrogami joined #gluster
11:23 necrogami joined #gluster
11:24 elico joined #gluster
11:25 Dw_Sn mator: why not ? I like LVM
11:25 Dw_Sn mator: it is easy to extend and so on, so why not ?
11:26 Slashman joined #gluster
11:32 nbalacha joined #gluster
11:37 nbalacha joined #gluster
11:39 dgandhi joined #gluster
11:41 dgandhi joined #gluster
11:42 dgandhi joined #gluster
11:43 dgandhi joined #gluster
11:45 dgandhi joined #gluster
11:47 dgandhi joined #gluster
11:47 meghanam joined #gluster
11:47 meghanam_ joined #gluster
11:48 dgandhi joined #gluster
11:49 dgandhi joined #gluster
11:50 soumya__ joined #gluster
11:50 ndevos REMINDER: Weekly Gluster Community meeting starting in 10 minutes in #gluster-meeting
11:50 dgandhi joined #gluster
11:51 TehStig joined #gluster
11:52 dgandhi joined #gluster
11:54 dgandhi joined #gluster
11:55 dgandhi joined #gluster
11:55 dgandhi joined #gluster
11:56 dgandhi joined #gluster
11:58 dgandhi joined #gluster
11:59 dgandhi joined #gluster
11:59 rafi joined #gluster
12:00 DV joined #gluster
12:01 dgandhi joined #gluster
12:02 dgandhi joined #gluster
12:04 lpabon joined #gluster
12:04 dgandhi joined #gluster
12:05 atalur joined #gluster
12:06 ndarshan joined #gluster
12:06 dgandhi joined #gluster
12:08 dgandhi joined #gluster
12:09 bennyturns joined #gluster
12:09 dgandhi joined #gluster
12:11 dgandhi joined #gluster
12:12 dgandhi joined #gluster
12:13 mator Dw_Sn, if you're going to extend glusterfs which is on top of lvm, why not just add another brick to glusterfs directly, without any additional layers (like lvm)?
12:17 atinmu joined #gluster
12:21 Dw_Sn mator: it makes sense in this case, however I might be adding disks to the same machine, which isn't fully used yet
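[Editor's note: LVM under a brick is a common layout (and thinly provisioned LVs become a requirement if you later want gluster volume snapshots, introduced in 3.6). A hypothetical sketch; device, names and sizes are examples:]
    pvcreate /dev/sdb
    vgcreate vg_bricks /dev/sdb
    lvcreate -n brick1 -L 500G vg_bricks
    mkfs.xfs -i size=512 /dev/vg_bricks/brick1
    mkdir -p /export/brick1 && mount /dev/vg_bricks/brick1 /export/brick1
    # growing it later without touching gluster:
    lvextend -L +200G /dev/vg_bricks/brick1 && xfs_growfs /export/brick1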
12:23 nshaikh joined #gluster
12:25 dusmant joined #gluster
12:26 itisravi_ joined #gluster
12:28 ndarshan joined #gluster
12:32 DV joined #gluster
12:32 hagarth joined #gluster
12:36 diegows joined #gluster
12:41 rafi joined #gluster
12:42 RameshN joined #gluster
12:45 topshare joined #gluster
12:46 topshare joined #gluster
12:48 chirino joined #gluster
12:57 lalatenduM joined #gluster
13:00 anoopcs joined #gluster
13:00 partner is there any way to target fix-layout to a specific problematic directory as seen from the logs (layout mismatch) ?
13:00 partner i'd still like to get this done faster than 45 days as there's only couple of dirs causing the log flood on the clients
13:02 coredump|br joined #gluster
13:04 lalatenduM joined #gluster
13:05 tdasilva joined #gluster
13:06 rafi1 joined #gluster
13:07 rafi1 joined #gluster
13:13 topshare joined #gluster
13:18 edward1 joined #gluster
13:19 sac_ joined #gluster
13:40 TehStig joined #gluster
13:45 ppai joined #gluster
13:48 dusmant joined #gluster
13:49 calum_ joined #gluster
13:53 rjoseph joined #gluster
13:53 ndarshan joined #gluster
13:53 SOLDIERz joined #gluster
13:59 bene2 joined #gluster
13:59 B21956 joined #gluster
14:00 mator is there a solution to a lock being held by the node's own uuid?
14:00 mator [2014-12-10 13:39:02.455355] E [glusterd-syncop.c:863:gd_lock_op_phase] 0-management: Failed to acquire lock
14:00 mator [2014-12-10 13:57:41.341981] E [glusterd-utils.c:153:glusterd_lock] 0-management: Unable to get lock for uuid: 0281863b-11bb-4675-9e71-3423adaabf1c, lock held by: 0281863b-11bb-4675-9e71-3423adaabf1c
14:01 mator glusterfs-3.5.3-1.el6.x86_64
14:01 sachin joined #gluster
14:02 ndevos uh, I think kshlm has seen that issue before
14:05 ppai joined #gluster
14:06 mator should "gluster peer status" include its own server name in the output ?
14:09 glusterbot News from newglusterbugs: [Bug 1172641] Use Gluster supplied logrotate for Centos 6 <https://bugzilla.redhat.com/show_bug.cgi?id=1172641>
14:14 julim joined #gluster
14:17 sachin_ joined #gluster
14:18 sachin joined #gluster
14:21 hagarth joined #gluster
14:24 pdrakeweb joined #gluster
14:27 getup joined #gluster
14:28 kodapa left #gluster
14:29 calisto joined #gluster
14:31 kkeithley @paste
14:31 glusterbot kkeithley: For RPM based distros you can yum install fpaste, for debian, ubuntu, and arch it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
14:33 nbalacha joined #gluster
14:40 tdasilva joined #gluster
14:40 ppai joined #gluster
14:43 virusuy joined #gluster
14:46 bennyturns joined #gluster
14:46 ricky-ti1 joined #gluster
14:54 _pol joined #gluster
14:59 _Bryan_ joined #gluster
15:00 theron joined #gluster
15:03 TehStig joined #gluster
15:08 kdhananjay joined #gluster
15:09 getup joined #gluster
15:10 mator Posix landfill setup failed
15:11 mator wtf
15:12 mator http://fpaste.org/158359/24318141/
15:14 mator anyone can help me with:
15:14 mator [2014-12-10 15:13:55.129586] E [glusterd-utils.c:153:glusterd_lock] 0-management: Unable to get lock for uuid: b8afc841-ce57-4dc6-b8c7-e2a6abcc6a61, lock held by: b8afc841-ce57-4dc6-b8c7-e2a6abcc6a61
15:16 wushudoin joined #gluster
15:18 dgagnon joined #gluster
15:18 dgagnon thats me
15:19 kmai007 mator: did something change in your vol file?
15:19 J_Man joined #gluster
15:19 kmai007 i'm not expert, but i was just trying to get some background info to help
15:20 mator kmai007, probably yes, since i went upgrade from 3.2.x to 3.3.x -> 3.4.x -> 3.5.x today
15:20 kmai007 i also upgraded from 3.4.3 -> 3.5.3 this week, let me check my logs for the same verbiage
15:21 kmai007 is that a brick log?
15:21 mator kmai007, "Posix landfill setup failed" is for the brick, i have 5 bricks on this server, and only 3 are online (glusterfsd is running for them)
15:21 mator while "Unable to get lock for uuid" is from glusterd log file
15:22 kmai007 when you upgraded, did you stop,kill, kill, kill all the gluster processes?
15:22 mator all glusterfs cluster nodes shutdown (glusterd stopped) except one
15:22 mator kmai007, yes
15:22 mator double checked
15:23 kmai007 i read in the changelog, that if /var/log/gluster isn't there, it may cause problems?
15:23 mator does somebody know whether it's possible to downgrade from glusterfs-3.5.x back to an older version?
15:23 kmai007 in 3.5.3
15:26 basso joined #gluster
15:26 nocturn joined #gluster
15:28 kmai007 mator: i found how to do it on redhat OS https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Installation_Guide/ch07s02s05s03.html
15:28 kmai007 not sure if its applicable
15:28 meghanam joined #gluster
15:28 meghanam_ joined #gluster
15:28 kmai007 still won't guarantee that it will get the other bricks up,
15:29 kmai007 should probably try to work through it, if its not causing you too much heart-burn,
15:31 calisto joined #gluster
15:32 mator kmai007, thanks
15:36 julim joined #gluster
15:36 bala joined #gluster
15:37 ninkotech_ joined #gluster
15:37 plarsen joined #gluster
15:40 mator can someone post me "gluster peer status" from their cluster? thanks
15:41 jobewan joined #gluster
15:44 partner sure, sec
15:44 partner http://fpaste.org/158372/14182262/
15:45 partner that is a two server test setup i had open
15:45 partner 3.4.5 version in case that matters
15:46 mator thanks
15:47 mator somehow my 9 nodes cluster show its own name in peer output...
15:47 mator and it does not have file with its UUID in /var/lib/glusterd/peers/
15:48 mator https://www.mail-archive.com/gluster-users@gluster.org/msg11707.html
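[Editor's note: on a healthy pool, the local node does not appear in its own peer status output; it is identified by /var/lib/glusterd/glusterd.info, while each remote peer gets a file named after its UUID under /var/lib/glusterd/peers/. A quick cross-check:]
    gluster peer status                   # should list every node except the one you run it on
    cat /var/lib/glusterd/glusterd.info   # this node's own UUID
    ls /var/lib/glusterd/peers/           # one file per remote peer, named by UUID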
15:50 partner haven't been following, have you perhaps manually tuned some volume files due to issues and as an effort to fix them?
15:50 dgagnon left #gluster
15:50 mrlesmithjr joined #gluster
15:50 virusuy hey guys, i'm running glusterfs 3.6.1
15:51 virusuy with a replicated volume (2 nodes)
15:51 virusuy but mount doesn't work
15:51 virusuy either nfs or using glusterfs
15:51 Jack[z] joined #gluster
15:51 gothos doesn't work? please supply error messages
15:52 partner and mount command and what not, no firewall in between?
15:52 virusuy gothos: hi, in fact, the commands don't output any errors at all
15:52 virusuy partner: no firewall
15:52 mrlesmithjr anyone here using Gluster+ZFS(ZOL)+NFS+vSphere
15:52 virusuy actually, i'm running mount in both nodes
15:52 partner have you started your volume and you trying to mount that volume and not brick straight?
15:53 virusuy partner: yes, volumes are started, and i executed mount node:/vol /path
15:53 virusuy and also using -t glusterfs
15:54 partner hmm
15:54 virusuy without error message at all, weird right ?
15:54 partner and no errors.. maybe check on the client box the logs if they would hint anything?
15:54 partner that would be /var/log/glusterfs/your-mount-point.log
15:54 gothos virusuy: I dont know if this somehow affects nfs, but... what is your LC_NUMERIC?
15:55 gothos virusuy: locale | grep NUM
15:55 virusuy LC_NUMERIC="es_UY.UTF-8"
15:56 gothos virusuy: okay, please try to set it to en_US.UTF-8 and try again
15:56 virusuy gothos: on my way !
15:56 gothos you might be hitting a bug
15:56 virusuy using export, right ?
15:56 gothos yes
15:56 gothos export LC_NUMERIC=en_US.UTF-8
15:57 virusuy LC_NUMERIC=en_US.UTF-8
15:57 virusuy ok, time to test
15:57 virusuy gothos: wow
15:57 virusuy bug indeed
15:57 gothos okay
15:58 kanagaraj joined #gluster
15:58 virusuy i mean, now works using mount.glustefs ....
15:58 gothos there is already a fix in the 3.6 branch, so next minor release should be fine
15:58 gothos I don't know about nfs
15:58 gothos never checked if that is affected by that bug as well, but it would surprise me
15:58 virusuy gothos: nfs respond as there's no server answering
15:58 gothos *shrug* no clue there
15:59 virusuy gothos: anyway, thanks for your help !
15:59 virusuy i'll use mount.glusterfs anyway
15:59 gothos np
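[Editor's note: until the fixed minor release mentioned above is installed, the workaround has to be in place wherever mount.glusterfs runs. A hedged sketch; the system-wide location varies by distro:]
    export LC_NUMERIC=en_US.UTF-8                  # per-shell, as used above
    mount -t glusterfs node1:/gv0 /mnt/gv0
    # or system-wide, e.g. in /etc/environment (Debian/Ubuntu) or /etc/locale.conf (systemd distros):
    # LC_NUMERIC=en_US.UTF-8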
16:01 tetreis joined #gluster
16:03 rwheeler joined #gluster
16:07 deniszh1 joined #gluster
16:08 nbalacha joined #gluster
16:09 maveric_amitc_ joined #gluster
16:10 crashmag joined #gluster
16:11 partner hmm interesting
16:12 partner but good catch, wasn't aware of that bug
16:13 coredump joined #gluster
16:16 virusuy partner: yeah
16:18 gothos virusuy: was that you on the mailing list?
16:18 gothos sounded like the same problem
16:18 virusuy gothos: nope, i'm not subscribed to the mailing list (i know, my bad)
16:19 gothos virusuy: np, just thought it might have been you, since it sounds very similar
16:19 gothos it == the problem
16:19 virusuy gothos: :-)
16:20 virusuy gothos: i'm reading that mail through mailling archive
16:20 virusuy and yes, sounds VERY similar
16:21 ricky-ticky1 joined #gluster
16:21 partner sorry, but for the nth time I have to ask: can i perform a targeted fix-layout somehow, even if it involves touching some attributes by hand?
16:24 mator is it possible to create a brand new glusterfs using data from old bricks (i.e. does not format them) ?
16:25 Philambdo joined #gluster
16:25 ninkotech_ joined #gluster
16:26 samkottler left #gluster
16:28 _dist joined #gluster
16:30 partner hmm, not sure if you will need to remove the .glusterfs structure anyways and offer plain files for it and then heal the volume
16:35 calisto joined #gluster
16:37 gothos probably, and might also remove existing xattr?
16:37 gothos s/might/maybe/
16:37 glusterbot What gothos meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
16:42 ninkotech_ joined #gluster
16:42 dgandhi joined #gluster
16:46 vimal joined #gluster
16:47 partner but, i would lab that first before signing, i've heard success stories
16:49 julim joined #gluster
16:52 _pol joined #gluster
17:05 gstock_ joined #gluster
17:06 julim joined #gluster
17:21 TrDS joined #gluster
17:25 lmickh joined #gluster
17:35 feeshon joined #gluster
17:35 TrDS left #gluster
17:39 Philambdo joined #gluster
18:01 _pol joined #gluster
18:17 JoeJulian partner: Yes, you can do that.
18:28 PeterA joined #gluster
18:31 primusinterpares joined #gluster
18:44 khelll joined #gluster
18:50 zerick joined #gluster
18:59 calisto joined #gluster
19:00 free_amitc_ joined #gluster
19:01 _pol joined #gluster
19:07 sashko joined #gluster
19:07 sashko hey everyone
19:08 partner yeah empty but for used brick, kind of would assume cleanup of .glusterfs is in place, won't affect the files of course
19:08 JoeJulian hey
19:08 sashko hey JoeJulian, long time no see :)
19:08 sashko seems like everyone is still here
19:09 JoeJulian Mostly
19:09 partner stuck with the technology :)
19:09 sashko hehe
19:09 sashko it's not stuck if it's working :)
19:10 JoeJulian partner: You can edit the dht mapping on the bricks or you can trigger a fix-layout on a per-directory basis through a client mount by setting a pseudo xattr.
19:10 partner one of our system reports "stuck working" while often they complain stuck idle or stuck something..
19:10 JoeJulian That's actually the same process as rebalance...fix-layout.
19:11 partner JoeJulian: oh, sorry, i thought you were commenting the other stuff i commented..
19:11 partner so the targeted fix-layout
19:11 JoeJulian right
19:11 sashko do you guys know if there are any recommendations for creating a new gluster storage with 2x bricks replica and 1 million files? If i copy it through the client it will take a long time, can I copy it to the bricks and then create gluster on top of the files that already exist?
19:12 JoeJulian @lucky gluster setfattr targetted fix-layout
19:12 partner i only have couple of directories complaining about the layout but on that volume i have probably 512k directories.. i would _really_ much like to target it to the problematic ones only on this case
19:12 glusterbot JoeJulian: http://gluster.org/pipermail/gluster-users.old/2013-January/012302.html
19:12 partner thanks, it seems my googling skills are degrading day by day :/
19:12 JoeJulian Ahhh, my google-fu is strong today.
19:13 chirino joined #gluster
19:13 partner that makes karma +1 again ;)
19:13 JoeJulian Plus, I've seen it before so I had some idea what I was looking for.
19:13 partner i'll dig into that, probably back with bunch of stupid questions shortly
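[Editor's note: the linked post triggers a per-directory fix-layout from a client mount by setting a virtual xattr on the directory. The exact key should be verified against that post and the installed gluster version; it is believed to be distribute.fix.layout:]
    # run on the FUSE mount, once per problematic directory (VOLNAME and path are placeholders)
    setfattr -n distribute.fix.layout -v "yes" /mnt/VOLNAME/path/to/dir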
19:15 sashko JoeJulian: do you have any recommendations?
19:15 JoeJulian I'd actually defer to semiosis on that one if he's available. He does that occasionally.
19:16 sashko oh cool
19:16 sashko semiosis: you around? Need to create a new storage simple replica with 1 million files, copy through client is slow, was wondering if I can just copy them to the bricks and then start gluster on top of those bricks?
19:17 sashko JoeJulian: btw which version are you running in production right now?
19:19 JoeJulian 3.4.4, though I recommend 3.5.3.
19:19 partner that's actually the other question i was earlier referring to, the one mator was asking. special offer, two satisfied clients at the price of one! ;)
19:20 sashko JoeJulian: any particular reason for the 3.5.3 recommendation?
19:24 JoeJulian Various bug fixes and redesigns that add stability.
19:24 TrDS joined #gluster
19:32 Philambdo joined #gluster
19:50 primusinterpares Anyone verifying that there isn't a knob that allows subdirectory mounts by the native client in the same manner you can do subdirectory mounts with an NFS client?
19:50 primusinterpares Sorry, anyone mind verifying for me, is what that should have said.
19:50 dberry joined #gluster
19:51 dberry joined #gluster
20:03 semiosis primusinterpares: pretty sure there's no such knob
20:03 semiosis primusinterpares: however a bind mount might be useful
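[Editor's note: a sketch of the bind-mount workaround, with hypothetical names — the native client always mounts the whole volume, and a bind mount then exposes just the subdirectory:]
    mount -t glusterfs server1:/gv0 /mnt/gv0
    mount --bind /mnt/gv0/projects /srv/projects
    # fstab equivalent of the bind mount:
    # /mnt/gv0/projects  /srv/projects  none  bind  0  0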
20:05 semiosis sashko: i've heard you can start with one brick full & the other empty then let healing fill the empty replica
20:05 semiosis JoeJulian: what do I do occasionally?
20:06 sashko JoeJulian: I see, but it's production ready you think?
20:06 sashko semiosis: how long does that usually take?
20:07 semiosis sashko: how long is long?
20:07 semiosis sorry, i dont know, never tried it, dont know anything about your setup, impossible to say
20:07 sashko i think over a day, it's a simple 2x brick replica with one client writing all the files
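[Editor's note: an untested sketch of what semiosis describes — seed only the first brick, create the replicated volume on top, and let a full heal populate the second replica. Names are placeholders, and as partner notes earlier in this log, this is worth labbing before trusting it with real data:]
    rsync -a /old/data/ server1:/export/brick1/            # pre-populate one brick only
    gluster volume create gv0 replica 2 server1:/export/brick1 server2:/export/brick1
    gluster volume start gv0
    gluster volume heal gv0 full                            # copy everything onto the empty replica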
20:19 khelll joined #gluster
20:25 XpineX joined #gluster
20:44 B21956 joined #gluster
20:45 pdrakeweb joined #gluster
21:07 _pol joined #gluster
21:19 Philambdo joined #gluster
21:27 kmai007 joined #gluster
21:30 Pupeno joined #gluster
21:51 barnim joined #gluster
21:55 diegows joined #gluster
22:03 badone joined #gluster
22:12 lpabon joined #gluster
22:36 JoeJulian semiosis: What you do occasionally is start a whole new gluster installation, copying over the old data.
22:36 JoeJulian semiosis: Which is what sashko was asking about doing.
22:36 semiosis well i did that once, but i moved the disks over with all the data on them already
22:36 semiosis both replicas were already full
22:50 jobewan joined #gluster
22:56 sashko ah ok, semiosis
22:56 sashko there should be a good way to do this, would help people move things faster into gluster
22:59 JoeJulian ... you mean like time travel?
22:59 free_amitc_ joined #gluster
23:21 partner would be cool if this project was the first to figure it out :)
23:24 PeterA joined #gluster
23:26 PeterA joined #gluster
23:41 plarsen joined #gluster
23:42 Telsin joined #gluster
23:45 gildub joined #gluster
23:53 khelll joined #gluster
23:56 tetreis joined #gluster
