My bash script is not making new files
I'm trying to iterate over a list of URLs and download the content of each one. The content is piped into json_pp to beautify it.
The problem is that the script only generates loot_0.json and keeps overwriting its content.
#!/bin/bash
COUNTER=0
cat links.txt | while read line; do #links.txt
    PAGE=$(curl -s $line)
    echo $PAGE | json_pp
done > loot/loot_$((COUNTER++)).json
I have also tried:

#!/bin/bash
COUNTER=0
cat links.txt | while read line; do #links.txt
    PAGE=$(curl -s $line)
    echo $PAGE | json_pp > loot/loot_$((COUNTER++)).json
done
The expected behavior is to get a series of files:

loot_1.json
loot_2.json
loot_3.json
...
Tags: bash
asked Nov 28 at 18:39 by Adam
Your first example will not work, since COUNTER is incremented outside of the while loop; however, the single output file should contain all pages that were brought in by curl. The second loop should produce individual files, one for each page brought in by curl. I tested it on my Ubuntu 18.04 system and it worked; I did not have the "| json_pp" part, but I had everything else.
– Lewis M, Nov 28 at 19:19
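To illustrate the point about the first script (this is my own minimal sketch, not from the original post): the redirection target after done is expanded only once, when the loop is set up, so COUNTER++ fires exactly once and every iteration writes to the same file.

#!/bin/bash
# Minimal illustration (not part of the original post): the redirection after
# 'done' is expanded once for the whole loop, so out_0.txt is the only file
# created and COUNTER ends up at 1, no matter how many iterations run.
COUNTER=0
for i in 1 2 3; do
    echo "iteration $i"
done > "out_$((COUNTER++)).txt"
echo "COUNTER is now $COUNTER"   # prints 1, not 3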
1 Answer
The following shell script works for me. I have problems with json_pp; it complains about "malformed JSON string, neither array, object, number, string or atom".
#!/bin/bash
counter=0
while read line
do
    # wget "$line" -O "${line##*/}$((counter++))"
    page=$(curl -s "$line")
    echo "$page" > "loot_$((counter++))"
done < links.txt
- I prefer lower-case variables (to decrease the risk of conflict with already existing environment variables).
- I redirect from the input file (and avoid calling cat).
- I update counter inside the loop.
- I use local file names loot_... for testing, but you can keep the loot/ subdirectory in the path if you wish.
Please notice that the variables are quoted ("...") where they are used in the script.
answered Nov 28 at 19:43 by sudodus
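If json_pp does accept your data, a variant of this loop that keeps the pretty-printing step and the loot/ subdirectory could look like the sketch below. This is only my sketch, not part of the answer above; it assumes links.txt contains one URL per line and that the loot/ directory already exists.

#!/bin/bash
# Sketch only: the answer's loop structure plus the original json_pp step.
# Assumes links.txt has one URL per line and loot/ exists (mkdir -p loot).
counter=0
while read -r line
do
    page=$(curl -s "$line")
    echo "$page" | json_pp > "loot/loot_$((counter++)).json"
done < links.txt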