Extracting data from an LDIF file with awk in a shell script

Hi, I'm trying to extract two fields, in CSV format, from a file that looks like the following.

dn: lorem ipsum
PersonalId: 123456
Contacts: { big json which may contain "uniquetag:98765432:" within some attributes }
dn: lorem ipsum
PersonalId: 123456
Contacts: { big json which may contain "uniquetag:98765432:" within some attributes }
dn: lorem ipsum
PersonalId: 123456
Contacts: { big json which may contain "uniquetag:98765432:" within some attributes }
dn: lorem ipsum
PersonalId: 123456
Contacts: { big json which may contain "uniquetag:98765432:" within some attributes }
Lorem ipsum
Lorem ipsum

Expected output:

123456,98765432
123456,98765432
...

The PersonalId field is always 6 digits, and the mobile number after uniquetag is always 8 digits. If a record has no mobile number, that record should be skipped.

I was able to do this with a simple shell script in which I walk the file line by line to assemble each record, then capture the values with the regex ^.*PersonalId.*([0-9]{6}).*uniquetag:([0-9]{8}).*$. It works and gives the output I need, but for roughly 2 million records (a 300 MB file) it takes about an hour to process, and I need to run this on much larger files. I tried some optimizations, such as splitting the original file into several smaller files and processing them in parallel (a GNU parallel sketch of the same idea follows the script below), but that did not improve performance much.

I suspect I'm doing a lot of this wrong and that these fields could be extracted directly with awk or sed. Is there a way to do that? My current shell script:

#!/bin/bash
# bash (not plain sh) is required: the script uses [[ ]], BASH_REMATCH and declare
start=$(date +%s)
_input_file="$1"
_output_file="$2"
_parallel_threads="$3"
_debug_enabled="$4"
_user_regex='^.*PersonalId.*([0-9]{6}).*uniquetag:([0-9]{8}).*$'
debug_log() {
log_message="$1"
if [[ $_debug_enabled == debug* ]]; then
printf "%bn" "$log_message"
fi
}
# Flatten one assembled record and extract the two fields with the regex above.
process_user_record() {
user_record="$1"
output_file_name="$2"
debug_log "processing user record"
record=$(echo "$user_record" | tr -d '\n')
debug_log "$record"
[[ $record =~ $_user_regex ]] && debug_log "user record matched regex" || debug_log "no match"
match_count=${#BASH_REMATCH[@]}
debug_log "Match count is $match_count"
if [[ $match_count -gt 2 ]]; then
pid="${BASH_REMATCH[1]}"
mobile="${BASH_REMATCH[2]}"
debug_log "Writing $pid and $mobile to output"
echo "$pid,$mobile" >>$output_file_name
fi
}
# Read one split file line by line; a dn: line starts a new record.
process_record_file() {
record_file_name="$1"
output_file="output_files/${record_file_name##*/}"
user_data=''
touch "$output_file"
debug_log "processing: $record_file_name"
while IFS= read -r line; do
if [[ $line == dn* ]]; then
debug_log 'line matches with dn'
if [ ! -z "$user_data" ]; then
process_user_record "$user_data" "$output_file"
user_data=''
fi
else
debug_log "Appending to user data"
user_data="${user_data}n${line}"
fi
done <"$record_file_name"
if [ ! -z "$user_data" ]; then
process_user_record "$user_data" "$output_file"
user_data=''
fi
}
echo "pid,mobile" >$_output_file
debug_log 'Starting export'
mkdir 'input_files_split'
mkdir 'output_files'
# Split the input into numbered chunk files of ~1000 records (blank lines count as record separators).
awk -v max=1000 '{print > sprintf("input_files_split/record%02d", int(n/max))} /^$/ {n += 1}' "$_input_file"
declare -i counter=0
for file in input_files_split/*; do
if [[ $counter -ge $_parallel_threads ]]; then
wait
counter=0
fi
process_record_file "$file" &
counter+=1
done
wait
cat output_files/* >>$_output_file
rm -rf input_files_split/
rm -rf output_files/
end=$(date +%s)
runtime=$((end - start))
echo "Export readynTime taken: ${runtime}s"

Using sed

$ sed -n '/PersonalId:/{N;/uniquetag:/s/[^:]*: \(.*\)\n.*uniquetag:\([^:]*\).*/\1,\2/p}' input_file
123456,98765432
123456,98765432
123456,98765432
123456,98765432

This matches a line beginning with PersonalId:. If the next line matches uniquetag:, the required data is captured in the grouping parentheses and emitted via the back references \1 and \2.
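
The same idea with the field widths from the question enforced, so that other colon-delimited digit runs inside the JSON cannot match by accident. A sketch only, using GNU sed's -E option, and assuming the Contacts line directly follows the PersonalId line as in the sample:

$ sed -nE '/PersonalId:/{N;/uniquetag:/s/.*: ([0-9]{6})\n.*uniquetag:([0-9]{8}).*/\1,\2/p}' input_file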

awk -F: '/^PersonalId/{ gsub(" ",""); id=$2; getline; if($3!="") print id","$3 }' csv_file
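
Here FS=':' makes the mobile number land in $3 of the Contacts line (the field right after "uniquetag"), and getline pulls that line in once PersonalId has matched. A sketch of the same one-liner with the 6- and 8-digit widths from the question enforced (needs an awk with POSIX interval support, e.g. gawk):

awk -F: '/^PersonalId/{ gsub(/ /,""); id=$2; getline; if (id ~ /^[0-9]{6}$/ && $3 ~ /^[0-9]{8}$/) print id","$3 }' input_file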
{m,g}awk '
ORS = substr("\n,", NF<+_ ? !_ : (NF==+_)+!!_, (+_<=NF)~($!NF = $+_)^!_)'
_=2 OFS= FS='^(PersonalId|.+uniquetag)[:][ ]?|[:].*[}].*$'
123456,98765432
123456,98765432
123456,98765432
123456,98765432

This might work for you (GNU sed):

sed -nE '/PersonalId:/h
/uniquetag:/{
H;g;s/.*PersonalId: *([0-9]{6}).*uniquetag:([0-9]{8}).*/\1,\2/p;}' file

Copy the PersonalId line into the hold space.

Append the uniquetag line to the PersonalId info in the hold space, then format it accordingly.
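
For completeness, the same hold-and-join idea in gawk: with the record separator set to the dn: lines, three-argument match() can capture both fields from a whole record at once. A sketch, gawk-only (three-argument match and a regex RS are gawk extensions), and it assumes dn: never appears inside the JSON:

gawk -v RS='dn:[^\n]*\n' 'match($0, /PersonalId: *([0-9]{6})/, a) && match($0, /uniquetag:([0-9]{8})/, b) { print a[1] "," b[1] }' input_file

A record with no uniquetag fails the second match and is silently skipped, which covers the skip requirement from the question.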
