How to evaluate the research level of a paper before any publication?



























There are three different cases in which the research level of a paper must be evaluated before publication:




  • for the author: to choose the most appropriate journal,

  • for the referee: to recommend acceptance or rejection for a given journal,

  • for the editor: to make the final decision.


If we compare this process to justice, a referee is like a lawyer, and an editor is like a judge.



Question: How can an author, a referee, and an editor (respectively) evaluate the research level of a paper?



Here we are asking about the purely research-related level of a paper, so we assume that the paper is original, correct, and well written. We also assume that the paper is not too specialized if the journal has a general audience, and that it is on-topic if the journal is specialized (and likewise for any other specificity). Finally, if it matters, I am mainly interested in mathematical papers.



A utilitarian approach could be to estimate how many papers will cite this paper in the next five years (self-citations excluded). The author could then choose a journal whose Article Influence score (after renormalization) matches this number, and the referee could check whether it matches the chosen journal. But then one would need to know how to make such an estimate...
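For concreteness, here is a minimal sketch (in Python) of what the matching step might look like, assuming a citation estimate already exists. The journal names, the Article Influence scores, and the renormalization constant below are all invented for illustration; they are not real data.

    # Hypothetical sketch: given an estimated five-year citation count
    # (self-citations excluded), pick the journal whose renormalized
    # Article Influence score is closest to that estimate.
    # JOURNALS and SCALE are invented placeholders, not real values.

    JOURNALS = {  # journal -> Article Influence score (made up)
        "Journal A": 4.1,
        "Journal B": 2.3,
        "Journal C": 0.9,
    }

    SCALE = 5.0  # assumed renormalization: expected citations per unit of score

    def pick_journal(estimated_citations: float) -> str:
        """Return the journal whose renormalized score best matches the estimate."""
        return min(JOURNALS, key=lambda j: abs(JOURNALS[j] * SCALE - estimated_citations))

    print(pick_journal(12.0))  # -> "Journal B" (2.3 * 5.0 = 11.5 is nearest to 12)

The matching step itself is trivial; as noted above, the real difficulty lies in producing a defensible citation estimate in the first place.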



Of course, an author, referee, or editor can evaluate the paper subjectively, but subjectivity varies with emotions, can be manipulated, and the process can become political. I wonder whether there is an objective way to proceed, or at least whether we can add a bit of rationality to this process. Consider the justice system: it undeniably contains a subjective part, but also a rational one, called the law.










Tags: peer-review, journals, mathematics, paper-submission, evaluation
asked yesterday by Sebastien Palcoux
2 Answers






Answer by Buffy (score 10):














While I think your "justice" metaphor is wrong, the answer in all cases is experience. You need experience in the field as a reviewer: you need to know what has been done and what is important yet to do. As an editor you need either that field experience or experience with your reviewers (who is trustworthy and who is not). As an early-career author you don't have any experience other than some within the field, but it will grow.



For everyone here, the way you get experience is to make best-effort attempts at whatever job you have and evaluate the response. As they say in engineering and computer science, "Good design comes from experience. Experience comes from bad design."



If you are looking for an algorithm, then I would say it doesn't exist. It might be possible in theory to construct one with an AI looking at tens of thousands of interactions, but it might show bias, as many such things have been shown to do.



But the system as a whole just depends on (nearly) everyone trying to do their best with what they have in front of them in a world of imperfect information.





In theory, theory is the same as practice. But not in practice. - Fnord Bjørnberger











• Indeed, particularly the bit on algorithms. Another good point would be from Deming (although in my experience most quality people forget this one): just because you can’t measure it doesn’t mean you can’t manage it. – Jon Custer, yesterday











• In the vast majority of cases, what a reviewer needs is exactly what you said, "to know what has been done and what is important yet to do." But I've also encountered a few cases where my reaction to a paper is to forget that and just say "wow, what a great idea!" – Andreas Blass, yesterday



















Answer by Michael Schmidt (score 4):














I think you have a misconception here about the point of peer review. The review process is not meant to predict how many citations a paper might get. That said, I have experienced in the past that some journals ask the reviewers, after the review is finished, whether a paper should be highlighted in the current issue of the journal or on the journal website.



As a reviewer, I can make a personal judgement about whether a paper should be highlighted, but science has become so broad, interdisciplinary, and diversified that I often vote not to highlight a paper, because it would be a subjective judgement. I also often leave the decision of whether a manuscript fits the scope of a journal to the editor (and care only about the quality of the reported research): that is literally not my business as an unpaid reviewer. This reflects my view that I select papers to read by myself, without letting factors like journal impact factor bias me too much.



It follows from scientific history that we often cannot predict the impact of fundamental research. I also think there is no need to do what you ask or suggest for submitted manuscripts before publication, because the highlighting of important research over time is done at conferences and later by the peers and readers in the community, via review articles or even blogs, where you have many more, more experienced, and more objective "judges" than before publication.











• The most selective journals definitely review on impact. It is not sufficient for a paper to be scientifically 'correct'. Editors can do a lot of that gating themselves, but it's typical for them to ask the reviewers, who are more expert in the specific field, for their opinions. – Bryan Krause, yesterday











• @BryanKrause Of course, so it remains a guessing game, and Nature and Science also publish many low-impact papers. But as a reviewer, I don't work, and don't want to work, for the business/impact model of a journal; that's my point here. How prestigious journals legitimize their impact, and whether this is very helpful for the progress of the community or rather creates "citation circles/cartels" among researchers/groups and hyped trends, is another question. – Michael Schmidt, yesterday










