
IRC log for #gluster, 2017-01-22


All times are shown in UTC.

Time Nick Message
00:01 BlackoutWNCT1 joined #gluster
00:10 jdossey joined #gluster
00:14 armyriad joined #gluster
00:15 Caveat4U joined #gluster
00:35 Klas joined #gluster
00:36 MikeLupe Where does oVirt get the bricks' "Self-Heal info" status from?? The underlying Gluster has NO unsynced state. I don't get it
00:36 MikeLupe Getting crazy after 3 days of research
00:57 MikeLupe Great, I wrote in the wrong channel. Sorry
01:26 nthomas_ joined #gluster
01:53 jbrooks joined #gluster
02:05 phileas joined #gluster
02:17 Caveat4U joined #gluster
02:24 gem joined #gluster
02:31 Telsin joined #gluster
02:37 derjohn_mobi joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:09 gem joined #gluster
03:13 nirokato joined #gluster
03:20 nirokato ping, anyone available for a quick assist? Running gluster 3.8.8 on a new host and I need to re-import the data directly from the datastore (old host took a header; the previous guy thought it was smart to run glusterfs w/o redundancy using a single brick). How do I get started?
03:27 nirokato brb, forgot to load tmux before launching weechat
03:28 nirokato joined #gluster
03:38 lkoranda joined #gluster
04:12 nirokato 40 minutes guys, anything? I'm dying here.
04:13 JoeJulian 40 minutes is nothing on a Saturday. You're just lucky I needed to use my computer this evening. :)
04:13 JoeJulian So what do you mean "re-import"?
04:14 JoeJulian My assumption is that you have a brick but no volume configuration?
04:14 nirokato I have the whole datastore available. Is the .glusterfs directory necessary for re-creating the brick?
04:14 nirokato JoeJulian: yes.
04:15 nirokato JoeJulian: thank you for responding, I _really_ appreciate it.
04:16 JoeJulian What I would do is *not* mount the brick. Create the new volume. Get the volume-id of the newly created volume (getfattr -m . -d -e hex $brick_root).
04:16 nirokato creating new volume, be a few
04:17 JoeJulian Mount the brick. Set the volume-id to that of the newly created volume (setfattr). Start the volume and voila!
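A minimal sketch of the sequence JoeJulian outlines above, using the brick path that appears later in this log (/mnt/storage/gfs); the volume name "myvol", server name "server1", and block device /dev/sdb1 are placeholders, not the user's actual names:

    # 1. With the surviving data NOT mounted at the brick path, create the
    #    new volume and note its volume-id ("force" may be needed depending
    #    on where the brick directory lives):
    gluster volume create myvol server1:/mnt/storage/gfs force
    getfattr -m . -d -e hex /mnt/storage/gfs    # note trusted.glusterfs.volume-id

    # 2. Stop the volume if it was started, then mount the old data over
    #    the brick path:
    gluster volume stop myvol
    mount /dev/sdb1 /mnt/storage/gfs

    # 3. Stamp the old brick with the new volume's id and start the volume:
    setfattr -n trusted.glusterfs.volume-id -v 0x<hex-id-from-step-1> /mnt/storage/gfs
    gluster volume start myvol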
04:19 Caveat4U joined #gluster
04:20 bwerthmann joined #gluster
04:21 nirokato JoeJulian: can you be more specific about "mount the brick"? I want to make sure I'm doing this correctly (fairly new to glusterfs, the previous guy didn't train me)
04:22 nirokato Do you mean mount it to the gluster volume?
04:22 JoeJulian I assume the data you still have was on something that can be mounted.
04:22 nirokato JoeJulian: it's already mounted to a directory currently. Do I need to re-mount it in place of the new brick?
04:23 JoeJulian Yes. Get the volume-id of the new brick first so you will be able to change it on the old brick.
04:23 nirokato Ah, thank you. Just wanted to be sure.
04:25 nirokato I have the volume-id of the brick, I'll remount the old brick in the new brick's place.
04:25 nirokato Should I take the volume offline before I do this?
04:26 JoeJulian If you started the new volume, yes. Stop it.
04:28 nirokato Ok, volume stopped and old brick mounted in new brick's place. What options do I need to pass to setfattr?
04:29 JoeJulian I forget what the actual xattr name is, but use that with -n. And whatever the value was with -v.
04:33 nirokato https://bugzilla.redhat.com/show_bug.cgi?id=991084 ?
04:33 glusterbot Bug 991084: high, unspecified, ---, bugs, CLOSED EOL, No way to start a failed brick when replaced the location with empty folder
04:33 nirokato first instance of setfattr, is that what I'm looking for?
04:34 nirokato er, second instance on that page
04:39 nirokato I could just do: setfattr -n trusted.glusterfs.volume-id -v [hex volume-id] /path/to/new/brick
04:39 nirokato is that correct?
04:41 nirokato e.g.: setfattr -n trusted.glusterfs.volume-id -v 0xe52db3ccdae140ec9e32c2389ead76c6 /mnt/storage/gfs
04:42 JoeJulian correct
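As a sanity check (same brick path as in the command above), the xattr can be read back and compared with the id of the newly created volume:

    # Verify the volume-id now stamped on the old brick:
    getfattr -n trusted.glusterfs.volume-id -e hex /mnt/storage/gfs
    # Expected output (the id from the command above):
    #   trusted.glusterfs.volume-id=0xe52db3ccdae140ec9e32c2389ead76c6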
04:42 atinm joined #gluster
04:44 mb_ joined #gluster
04:50 nirokato JoeJulian: ok, so I started the volume after performing the setfattr, but I'm getting the following error: [glusterfsd-mgmt.c:1590:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
04:51 nirokato this is on mount on another host
04:51 nirokato Is it related?
04:52 nirokato nvm, typo in volume name
04:53 nirokato It's up, you are amazing JoeJulian, thank you!
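The mount error above came from a mistyped volume name; for reference, a native-client mount looks like this (server and volume names here are examples):

    # The name after the colon must match the volume exactly; a wrong name
    # produces the "failed to get the 'volume file' from server" error seen above:
    mount -t glusterfs server1:/myvol /mnt/gluster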
05:40 atinm joined #gluster
05:48 Lee1092 joined #gluster
06:04 rafi joined #gluster
06:07 nthomas_ joined #gluster
06:17 ashiq joined #gluster
06:21 Caveat4U joined #gluster
06:27 victori joined #gluster
06:46 victori joined #gluster
06:49 nthomas_ joined #gluster
07:22 Wizek joined #gluster
07:34 Humble joined #gluster
07:45 musa22 joined #gluster
07:57 victori joined #gluster
07:58 Caveat4U joined #gluster
08:11 bbooth joined #gluster
08:11 niknakpaddywak joined #gluster
08:26 klma joined #gluster
08:33 klma joined #gluster
08:46 ahino joined #gluster
08:51 Shu6h3ndu joined #gluster
09:45 k4n0 joined #gluster
09:49 rwheeler joined #gluster
09:52 victori joined #gluster
09:59 Caveat4U joined #gluster
10:08 Shu6h3ndu joined #gluster
10:11 d-fence joined #gluster
10:11 d-fence_ joined #gluster
10:38 kkeithley joined #gluster
10:40 toshywoshy left #gluster
10:52 Shu6h3ndu joined #gluster
11:00 Klas joined #gluster
11:06 anoopcs joined #gluster
11:15 ankit joined #gluster
11:39 Klas joined #gluster
11:42 klma joined #gluster
11:58 masber joined #gluster
12:01 Caveat4U joined #gluster
12:31 foster joined #gluster
12:34 sona joined #gluster
12:59 musa22 joined #gluster
13:24 nirokato joined #gluster
13:25 gluytium joined #gluster
13:35 skoduri joined #gluster
13:41 sona joined #gluster
13:58 ahino1 joined #gluster
14:03 Caveat4U joined #gluster
14:05 hchiramm_ joined #gluster
14:11 nthomas_ joined #gluster
14:11 TFJensen Hi there.. Need some Newbie help. Do I have to stop gluster before I do a lvresize on my data volume ?
14:20 rafi joined #gluster
14:37 MikeLupe TFJensen: I did that once on an oVirt gluster r3a1 (replica 3, arbiter 1) setup and as far as I remember it was pretty easy - I mean vgextend the needed gluster_vg, then "lvextend -l +100%FREE /dev/*volumegroup*/directory" and then "xfs_growfs /dev/*volumegroup*/directory". But I think I put the node into maintenance to be sure.
14:37 MikeLupe I think JoeJulian helped me on that one back then
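A sketch of the growth path MikeLupe describes, assuming an XFS brick on LVM; the volume group "gluster_vg", logical volume "data", new disk /dev/sdc, and mount point /bricks/data are example names:

    pvcreate /dev/sdc                           # prepare the newly added disk
    vgextend gluster_vg /dev/sdc                # add it to the volume group
    lvextend -l +100%FREE /dev/gluster_vg/data  # give the LV all the new space
    xfs_growfs /bricks/data                     # grow the mounted XFS filesystem

XFS can be grown online, so the brick can stay mounted; putting the node into maintenance first, as MikeLupe did, is the cautious option.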
14:47 hchiramm_ joined #gluster
14:59 flying joined #gluster
15:01 sbulage joined #gluster
15:05 TFJensen I did a lvextend -l -r +100%FREE /dev/gluster/data, but now the size is bigger than the vol group. How can I get back...
15:05 TFJensen lvresize -r -l +100%FREE /dev/gluster/data
15:05 TFJensen WARNING: Sum of all thin volume sizes (2.18 TiB) exceeds the size of thin pool gluster/lvthinpool and the size of whole volume group (1.71 TiB)!
15:05 TFJensen For thin pool auto extension activation/thin_pool_autoextend_threshold should be below 100.
15:05 TFJensen Size of logical volume gluster/data changed from 200.00 GiB (51200 extents) to 1.79 TiB (469388 extents).
15:05 TFJensen Logical volume gluster/data successfully resized.
15:12 MikeLupe it worked?
15:15 MikeLupe TFJensen: you did "vgextend *volumegroup* /dev/mapper/*physicaldevice*" for a newly added disk?
15:15 MikeLupe or similar ;)
15:21 Gambit15 joined #gluster
15:37 mb_ joined #gluster
15:38 TFJensen yes I did that first
15:39 TFJensen vgextend gluster /dev/md129
15:40 TFJensen I did a lvreduce --size 800GB /dev/gluster/data
15:41 mb_ joined #gluster
15:41 TFJensen WARNING: Reducing active and open logical volume to 800.00 GiB.
15:41 TFJensen THIS MAY DESTROY YOUR DATA (filesystem etc.)
15:41 TFJensen Do you really want to reduce gluster/data? [y/n]: y
15:41 TFJensen Size of logical volume gluster/data changed from 3.37 TiB (883395 extents) to 800.00 GiB (204800 extents).
15:41 TFJensen Logical volume gluster/data successfully resized.
15:42 TFJensen df -h reports:
15:42 TFJensen "/dev/mapper/gluster-data    3.4T   42M  3.4T   1% /gluster/brick2"
15:44 TFJensen fdisk -l reports:
15:44 TFJensen Disk /dev/mapper/gluster-data: 859.0 GB, 858993459200 bytes, 1677721600 sectors
15:44 TFJensen Units = sectors of 1 * 512 = 512 bytes
15:44 TFJensen Sector size (logical/physical): 512 bytes / 512 bytes
15:44 TFJensen I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
15:45 arpu joined #gluster
15:46 musa22 joined #gluster
15:50 derjohn_mobi joined #gluster
16:05 derjohn_mobi joined #gluster
16:05 Caveat4U joined #gluster
16:13 derjohn_mobi joined #gluster
16:26 ankit joined #gluster
16:42 jri joined #gluster
16:43 bbooth joined #gluster
16:53 Caveat4U joined #gluster
17:00 jbrooks joined #gluster
17:01 mb_ joined #gluster
17:04 plarsen joined #gluster
17:14 jri joined #gluster
17:18 haomaiwang joined #gluster
18:13 haomaiwang joined #gluster
18:16 renout_away joined #gluster
18:22 Jacob843 joined #gluster
18:26 ahino joined #gluster
18:35 sona joined #gluster
18:51 armyriad joined #gluster
18:55 Caveat4U joined #gluster
19:07 arpu joined #gluster
19:07 haomaiwang joined #gluster
19:27 jiffin joined #gluster
19:37 mhulsman joined #gluster
19:47 musa22 joined #gluster
20:24 jkroon joined #gluster
20:24 nh2_ joined #gluster
20:25 mhulsman joined #gluster
20:28 jdossey joined #gluster
20:31 daMaestro joined #gluster
20:38 haomaiwang joined #gluster
20:43 mhulsman joined #gluster
20:44 mhulsman joined #gluster
20:44 jdossey joined #gluster
20:44 mhulsman joined #gluster
20:45 mhulsman joined #gluster
20:46 mhulsman joined #gluster
20:46 victori joined #gluster
20:47 mhulsman joined #gluster
20:48 al joined #gluster
20:48 mhulsman joined #gluster
20:48 mhulsman joined #gluster
20:49 mhulsman joined #gluster
20:50 mhulsman joined #gluster
20:51 mhulsman joined #gluster
20:51 al joined #gluster
20:52 mhulsman joined #gluster
20:54 al joined #gluster
20:58 Caveat4U joined #gluster
20:58 al joined #gluster
21:02 pulli joined #gluster
21:04 jri joined #gluster
21:05 victori joined #gluster
21:08 mhulsman joined #gluster
21:09 mhulsman joined #gluster
21:09 nh2_ joined #gluster
21:10 mhulsman joined #gluster
21:11 mhulsman joined #gluster
21:16 victori joined #gluster
21:21 mhulsman joined #gluster
21:22 mhulsman joined #gluster
21:23 alezzandro joined #gluster
21:30 al joined #gluster
21:35 cacasmacas joined #gluster
21:39 mhulsman joined #gluster
21:39 musa22 joined #gluster
21:43 musa22 joined #gluster
21:47 PaulCuzner joined #gluster
21:47 PaulCuzner left #gluster
22:01 mhulsman joined #gluster
22:02 mhulsman joined #gluster
22:03 mhulsman joined #gluster
22:03 mhulsman joined #gluster
22:04 derjohn_mobi joined #gluster
22:04 mhulsman joined #gluster
22:05 mhulsman joined #gluster
22:06 mhulsman joined #gluster
22:09 haomaiwang joined #gluster
22:19 mb_ joined #gluster
22:56 JoeJulian TFJensen: That "df" suggests you also resized the filesystem. If it's xfs, unfortunately xfs cannot shrink. If it's ext4, you can shrink the filesystem to match the block device.
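A hedged sketch of what that would look like, using the device and mount point from the transcript (/dev/gluster/data, /gluster/brick2); this is only a sketch, since shrinking a filesystem after the LV has already been reduced underneath it is risky:

    df -T /gluster/brick2        # check the filesystem type (or: blkid /dev/mapper/gluster-data)

    # XFS: cannot shrink; the usual route is back up, recreate smaller, restore.

    # ext4 only: unmount, check, and shrink the filesystem to the LV's new size:
    umount /gluster/brick2
    e2fsck -f /dev/gluster/data
    resize2fs /dev/gluster/data 800G
    mount /dev/gluster/data /gluster/brick2

In general the safe order is to shrink the filesystem first and the logical volume second (or let lvreduce -r handle both).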
23:03 victori joined #gluster
23:03 masber joined #gluster
23:07 masber joined #gluster
23:09 MikeLupe JoeJulian: you're here?
23:14 cacasmacas joined #gluster
23:21 Caveat4U joined #gluster
23:25 pulli joined #gluster
23:28 BitByteNybble110 joined #gluster
23:37 swebb joined #gluster
23:40 jdossey joined #gluster
