RESEARCH IN PSYCHOLOGY

What counts as "good" quantitative research, and what can we say about when to use quantitative and/or qualitative methods?

1. How interpretation enters into inquiry

To set the stage for discussing the scope of "good" quantitative research, I will briefly reconsider the role played by interpretation in the process of inquiry. In my position paper, I argued that when we try to understand psychological phenomena, we have to take as bedrock the practices in which people are engaged. These practices are concretely meaningful in a way that cannot be explained by other, supposedly more basic, terms. I also pointed out that this idea is very closely linked to the view that psychologists themselves are participants in the world of practices. Inquiry in psychology is itself practical activity. As I discussed elsewhere at some length (Westerman, 2004), practices of inquiry in our field are based in part on the ways in which we learn about things in everyday life (e.g., a teacher trying to discover the best way to teach 7-year-olds how to read) and also on the practices in which we participate in our lives in general (e.g., all the practices in the given culture in which reading plays a role). Two points follow from this view of interpretation that will provide the basis for my responses to issues raised in the commentaries. The first point is that research in psychology is irreducibly interpretive. It cannot be a transparent process of learning what human behavior is "really like" in a final sense--the kind of understanding an uninvolved subject might garner from a removed point of view. On this point, all three commentaries at least appear to agree with me. Stiles directly asserts his agreement with this idea, and Dawson et al. and Stam argue against the notion that research can provide us with a "view from nowhere."

The second point that follows from what I have said so far concerns what it means to say that research is interpretive. Most often, calls for an interpretive approach to research--for example, by proponents of qualitative methods--emphasize the subjective appreciation of meanings. We see this in the fact that almost all qualitative studies are based on interviews aimed at learning about participants' subjective experiences. But this approach also appears when we go beyond interview-based research and consider efforts that emphasize the investigators' "views" of the phenomenon of interest, for example, themes they identify in their research.

In contrast to this focus on how we think about or experience things, my understanding of "interpretation" emphasizes how research irreducibly refers to how we do things as participants always already engaged in practical activities. As I discussed in my position paper, my approach centers on the role played by prereflective understanding, or a familiarity with things that is prior to any efforts aimed at thematized knowledge. In our everyday example of figuring out how to teach a class to read, the teacher's "investigation" takes place against the background of his or her sense of what counts as progress (e.g., reading with some indications of comprehension, unless this is a class in reading Hebrew aimed largely at preparing students to sound out words in order to recite prayers in the synagogue). This background is not primarily a matter of how the teacher thinks about things. One way to put it is that the relevant background is what comes prior to what the teacher thinks about. This point holds for psychological research as well. The process of inquiry is always embedded in our ways of life. Research is indexical in the sense that every aspect of what we do as investigators, including what we take as important problems to explore and what we learn from our inquiries, always refers beyond itself to our prior involvement in the world of practical activities. Although it is not clear to me what Stam meant when he said that both Yanchar and I used the term "interpretation" in two different ways, for me, the use of the term that refers to investigators' prior familiarity with practices--which may be what Stam (2006) refers to as a "rather ordinary" use of the term--is the crucial one.

I should note that although Stiles and I agree on many points, my guiding perspective is quite different from his experiential correspondence theory of truth. Stiles focused on what seems to be a subjectivist matching notion: "A statement is true for you to the extent that your experience of the statement corresponds to your experience of the event (object, state of affairs) that it describes." He talked about good research as inquiry that effectively shares experiences. As I see it, these ideas depart markedly from a view of research as practical activity, although Stiles (footnote 2) also said he agreed with this view. For me, the key criterion of truth is pragmatic (i.e., what works, taking this in a broad sense that includes whether something we believe we have learned contributes--not necessarily in any simple, direct way--to our ways of life), and research, ultimately, is not learning the way things (including my experience of things) are, but an activity that is part of doing things.

2. If not "real" measures, then what?

Stam argued that my view of quantitative research is problematic because such research should be based on "real" measures, that is, assessments that "refer back to some concrete feature of the world," whereas what I call measurement amounts to nothing more than simply "assigning numbers to things." As I noted at the outset, Dawson et al. similarly advocated the value of adhering to the classical definition of measurement, although they expressed much more optimism than Stam about the possibility of developing such "strong" measures.

I believe that it is not possible to develop measures that meet the criteria for "real" measures and that we should not aim to develop such measures. These claims follow directly from the first point above about interpretation. All research is interpretive, and this certainly includes the key research process of measurement. Whether or not the classical notion of measurement can and should apply in some natural sciences, it does not apply to research in the human sciences. But we can employ measurement procedures, and therefore make use of quantification in our investigations, so long as we understand what we are doing in a novel way, which could be called a different "theory of measurement."

Here, the second point about interpretation comes into play. We can make use of measurement so long as we recognize that our measures are indexical, that is, interpretive in the sense that they always refer beyond themselves to our prior familiarity with practices. Such measures can be of very different kinds, which, very roughly speaking, mark out a continuum ranging from the very concrete to the obviously meaning-laden. Measures of decibel levels lie quite far toward the "concrete" end of this continuum, the coding category "yells" moves away from that end, and global ratings of "behaves in a hostile manner" lie well toward the "meaning-laden" end. Note, however, that because all measures are indexical, all points along this continuum are ultimately both concrete and meaningful. All of them, in varying ways, concretely specify the phenomena of interest while reflecting the fact that those concrete specifications are never exhaustive. A measure at the concrete end of this continuum based on decibel levels might be the dependent variable in an experimental paradigm within which high-decibel verbalizations are examples of angry behavior. At the other end of the continuum, global ratings will be based on a manual that uses concrete examples to define the phenomenon of interest.
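
As an illustration, here is a minimal sketch, in Python, of how measures at the two ends of this continuum might be operationalized. Everything in it is hypothetical: the decibel threshold, the category labels, and the rating anchor are illustrations, not procedures drawn from any of the studies discussed.

    # A minimal sketch of measures at two points on the concrete-to-
    # meaning-laden continuum. All values and labels are hypothetical.
    from dataclasses import dataclass

    # Hypothetical cutoff, meaningful only within a structured paradigm
    # in which loudness serves as a concrete example of angry behavior.
    ANGER_DECIBEL_THRESHOLD = 75.0

    def code_vocalization(decibels: float) -> str:
        """'Concrete' end: code a vocalization from its measured loudness."""
        return ("high-intensity" if decibels >= ANGER_DECIBEL_THRESHOLD
                else "low-intensity")

    @dataclass
    class GlobalRating:
        """'Meaning-laden' end: a rater's judgment, anchored by a manual
        whose concrete examples never exhaustively specify the construct."""
        construct: str      # e.g., "behaves in a hostile manner"
        value: int          # e.g., 1 (not at all) .. 5 (extremely)
        manual_anchor: str  # the concrete example the rater leaned on

    rating = GlobalRating(
        construct="behaves in a hostile manner",
        value=4,
        manual_anchor="raises voice and issues threats during a disagreement",
    )
    print(code_vocalization(82.3), "|", rating.construct, "=", rating.value)

The sketch is meant to show that both ends are indexical: the threshold is meaningful only within a paradigm in which loudness counts as angry behavior, and the rating is anchored by a manual whose concrete examples are never exhaustive.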

Given this "theory of measurement," I think it is misleading to say--as Stam and Dawson et al. claimed--that I call for "weak" measures rather than "strong" ones. As I see it, I am offering a different framework that incorporates many measures that might well be called "strong" measures (those near the "concrete" end of the continuum), although they do not conform to the classical definition of measurement. Stam argued that my position would lead to confusion in the field. He characterized it as calling for an "arbitrary" process of assigning numbers to events, asked us to "imagine a world where we each developed our own measures of length or temperature," and cited the multitude of existing personality measures as an example of how things have, in fact, already gotten out of hand. I do not find these arguments convincing. For one thing, any concern about diversity of viewpoints surely holds at least as clearly for qualitative research, which Stam supports. Moreover, while I agree with Stam that cooperation in the field is desirable, I do not believe my position works against it. My second point about interpretation is relevant here. I am not advocating inquiry that is interpretive in the sense that it is based on however an investigator happens to think about things. As I suggested in my position paper, practices of inquiry are relative to investigators' prereflective understanding, but they are not arbitrary. Interpretive inquiry does not lead to a problematic free-for-all by any means. Only certain ways of proceeding will prove useful for people who are participants in a shared world of practical activities (see Sugarman & Martin, 2005). Furthermore, investigators can make their procedures public and cooperate in using one set of measures when that seems useful in a given situation, even if they can never fully explicate the procedures, because the procedures always refer beyond themselves to the background of the shared world. It is true that there is likely to be a diversity of approaches to any given issue, but this is desirable. Diversity in measures and other research procedures often is a function of differences in research goals (see Westerman, 2004, p. 137). Fundamentally, diversity in approaches is both good and necessary because investigators in psychology address issues that do not have final, determinate answers.

3. How is interpretive quantitative research helpful?

Even if employing interpretive quantitative measures does not have the downside of leading to a confusing free-for-all, we can still ask, along with Stam, whether there is something to be gained by using numbers in our investigations. As I pointed out in my position paper, I agree with researchers who embrace positivism about some of the useful features of quantitative measures and quantitative research procedures in general (e.g., they enhance our ability to investigate group differences without being unduly influenced by dramatic instances of a phenomenon). I want to mark out an additional basis for appreciating what quantitative methods have to offer.

In my position paper, I argued that quantitative research procedures can make a special contribution because they require us to concretely specify our ideas about psychological phenomena. I endorsed such measurement procedures as relational coding, which could be called "soft" measurement, but I also discussed how what could be considered "strong" measures and related quantitative procedures (e.g., coding discrete behaviors, conducting experiments) also offer useful ways to concretely specify phenomena of interest--although I argued for reconceptualizing these methods as interpretive procedures and recognizing that they do not exhaustively specify the constructs and processes under investigation. Now, I want to extend my analysis of the ways such "apparently strong" measures and procedures can be extremely helpful. To begin with, "apparently strong" procedures can be highly informative about particular situations that are of interest in connection with particular applied problems.

For example, consider Wood's paradigm (e.g., Wood & Middleton, 1975) for examining how mothers scaffold their children's attempts to learn how to build a block puzzle, which I referred to in my position paper. That paradigm includes a clearly delineated procedure for identifying the specificity of parental bids at guiding a child. Although the goal is to explore a relational process (i.e., do mothers home in and out contingently as a function of the child's moment-to-moment success), the specificity measure does not rely on relational coding. Instead, each bid is coded based on its own properties. Investigating parent-child interaction in this specific situation has been shown to have applied utility. In a study I conducted (Westerman, 1990), assessments of maternal behavior in the context of Wood's paradigm discriminated between mother-preschooler dyads with and without compliance problems. In an experimental study, Strand (2002) found that teaching mothers to home in and out when they show their children how to build Wood's puzzle leads to greater child compliance in a separate context.
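
To make the relational question concrete, here is a minimal sketch of the contingency computation the homing in/out question calls for: does the mother offer more specific help after the child fails and less specific help after success? The five-point specificity levels and the sample data are hypothetical; Wood and Middleton's (1975) actual coding scheme and analyses differ in their details.

    # A minimal, hypothetical sketch of a contingent-shift analysis.
    # Each tuple: (specificity level of a maternal bid, did the child's
    # next attempt succeed?). Level 1 = general verbal prompt ...
    # level 5 = demonstration.
    bids = [(2, False), (3, False), (4, True), (3, True), (2, False), (3, True)]

    contingent = 0
    transitions = 0
    for (level, success), (next_level, _) in zip(bids, bids[1:]):
        transitions += 1
        # Contingent shift: home in (more specific) after failure,
        # home out (less specific) after success.
        if (not success and next_level > level) or (success and next_level < level):
            contingent += 1

    print(f"contingent shifts: {contingent}/{transitions} "
          f"({contingent / transitions:.0%})")

Note that even this simple computation presupposes the meaningful context: the levels are levels of help with a shared task, and "success" is success at what the child is trying to do.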

"Apparently strong" quantitative methods also can lead to the discovery that specific, concrete forms play a role in many situations, not just the original measurement context. For example, Strand (2002) found that the specificity scale was useful when applied to a task other than Wood's block puzzle. Similarly, we might find that measures that were initially employed in particular structured observation contexts, say a measure of verbal aggression based on decibel levels or, more likely, a measure of activation in a certain part of the brain, identify specific concrete forms that play a particular role quite generally. Merleau-Ponty (1962) used the term "sediment" to refer to concrete forms of this sort. Sediment often plays a part in psychological phenomena, and "apparently strong" quantitative procedures can be very helpful because they enable us to learn about these aspects of practical activity.

Two qualifications are in order, however. First, even when one aspect of a phenomenon of interest typically takes a specific concrete form, we need to recognize that it is part of a larger, meaningful process. For example, even if Wood's specificity scale worked in all contexts--which is extremely unlikely--it would be crucial to appreciate the role that the specificity of maternal directives plays as part of doing something, that is, teaching a child. It is not specificity per se, but the modulation of maternal efforts as a function of the child's success at what he or she is doing that is crucial. The second qualification is that there are always limits to the ways in which specific concrete contents function in a particular manner. It is useful to discover that a certain area of the brain typically functions in a particular way as part of what a person is doing, but another area might play this role under particular circumstances, perhaps due to brain plasticity. "Apparently strong" quantitative studies can be helpful here, too, because they are useful for marking out the relevant limits.

Such research has other benefits that can be considered the flipside of the advantages I have mentioned so far. Studies employing "apparently strong" quantitative procedures can help us understand psychological phenomena in terms of richly generative principles, because quantitative measures such as discrete behavior codes provide concrete examples of meaningful constructs, and quantitative procedures like experiments constitute concrete examples of meaningful processes. For example, research employing Wood's paradigm suggests the general principles that "homing in and out" is a crucial feature of parenting and that this process refers to modulating the specificity of parental bids. "Apparently strong" quantitative methods are well suited for investigating these claims. We might well find, for example, that Wood's specificity scale itself is relevant only in a few contexts, but that some other concrete characterization of modulating specificity is on the mark quite generally. Alternatively, we might find that, however understood, modulation of specificity has limited relevance, but that "homing in and out" captures an important process if we define it in concrete ways that share in common the contingent provision of more or less help. "Apparently strong" quantitative research is very useful for learning about general principles because these principles are concretely meaningful; they are not abstract ideas. Even if we somehow knew beforehand that a given principle was true (which, of course, is never the case), we would not know what it actually means, because there is no transparent mapping from the principle to concrete events. "Apparently strong" quantitative research procedures would help us greatly in this hypothetical situation, and they help us all the more in real research situations in which we simultaneously must learn the principles and what they mean concretely.

4. It's "good" quantitative research and it's interpretive

Studies by Fischer and his colleagues (e.g., Fischer, 1980; Fischer & Bidell, 1998) and Dawson (2006) have investigated development in a wide range of domains, including, among many others, understanding of social interaction concepts such as "nice" and "mean," skills in mathematics, and understanding of "leadership." This research has provided a great deal of support for a clearly delineated 13-level developmental sequence in complexity, ranging from reflexive actions to understanding principles. Dawson et al. (2006) claimed that it demonstrates the value of "strong," positivist quantitative methods, which, they say, are excluded in the approach to quantitative research I offered in my position paper. In particular, they argued that their work provides a "developmental ruler" that represents a universal, content-independent measure of increasing hierarchical integration.

Do their examples show that Stam was off the mark when he argued that, although such research is highly desirable, it is something that we see rarely if at all in the field? Do the examples demonstrate that my approach fails to incorporate an important range of research efforts?

In fact, I believe that this research offers us excellent examples of "good" quantitative research. I disagree with Dawson et al.'s characterizations of their own research, however. As I see it, the research in question is a fascinating example of one of the situations I described earlier: the case in which initial findings in a particular domain or a few domains suggest a general principle. In particular, in this situation, the general principle is the developmental sequence of hierarchical integration. There is a real risk here (given our philosophical tradition) of imagining that this sequence is a fully abstract, reified structure that "lies behind" concrete phenomena and of failing to recognize the ways in which interpretation enters into the research.

The studies by Fischer, Dawson, and their colleagues employ measures that are extremely useful, but not "strong" in the positivist sense marked out by classical notions of measurement or by Stam's idea about measures that "refer back to some concrete feature of the world." Consider examples from Dawson's (2006) Lectical™ Assessment System. In that system, a child's understanding is said to be at the level of single representations if the child offers a statement like "Camping is fun" in an assessment interview. By contrast, the child's understanding would be at the higher level of representational mappings if he or she employed an expression describing a "linear relationship," such as "If you don't do what your father tells you to do, he will be really mad at you." But determining the level of such responses is by no means a transparent process. For one thing, there is no one-to-one relationship between developmental level and form of speech. A child might say, "If I go camping, I have fun" and still be at the level of single representations, if the statement really boils down to "Camping is fun" because the child cannot actually coordinate the relevant single representations in a mapping relationship. Dawson (2006) herself noted that meaning is "central" to the scoring procedure and gave an example concerning the interview question, "Could you have a good life without having had a good education?" In this example, a rater found it difficult to score a response that included the word "richer" because it was not clear whether this word referred to having more money or to having a life with broader and deeper significance.
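
The point that surface form underdetermines level can be made vivid with a deliberately naive sketch. The pattern-matching classifier below is hypothetical and is not part of the Lectical system; it scores by linguistic form alone, and it therefore misclassifies exactly the case just described.

    # A hypothetical, deliberately naive classifier that scores by surface
    # form alone -- precisely what a human rater must not do.
    import re

    def naive_level_from_form(statement: str) -> str:
        """An 'if ..., ...' construction is taken as a representational
        mapping; anything else as a single representation."""
        if re.search(r"\bif\b.*,", statement, flags=re.IGNORECASE):
            return "representational_mappings"
        return "single_representations"

    print(naive_level_from_form("Camping is fun."))
    # -> single_representations (plausible)
    print(naive_level_from_form("If I go camping, I have fun."))
    # -> representational_mappings, yet the child may mean only "camping
    #    is fun"; only a rater's grasp of the meaning can catch this.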

Dawson (2006) claimed that the "developmental ruler" provides a way to "look through content" in assessments of structure, but in my view, the brief remarks I have just offered point to a crucial sense in which hierarchical integration is a concretely meaningful idea. When we apply the developmental ruler to a new domain, we have to discover not only whether the developmental sequence holds in that domain but also what counts as single representations, representational mappings, and so forth in this context. The "ruler" provides us with valuable ideas about how to think about complexity, but in itself it is empty. To use it, investigators have to proceed with the crucial steps of designing an assessment procedure and preparing scoring manuals for each domain. These steps reflect the investigators' rich appreciation of the concretely meaningful practices in the domain, including what kinds of connections can obtain within this range of phenomena. This rich appreciation is largely prereflective understanding. Hence, the procedures and scoring manuals for each domain play a truly central role that is not "given" by the general principles. Furthermore, they do not offer exhaustive concrete specifications of the phenomena of interest. Raters have to draw on their own prior familiarity with the way things work.

Some further comments are in order concerning the fact that most or all of the studies under consideration were based on assessing individuals' developmental level in a structured interview or by means of some other similar structured procedure. These assessments provide measures that have considerable precision. Moreover, they unquestionably tap important skills. Nevertheless, we should recognize that such investigations differ from other possible studies that would examine what an individual does when he or she is engaged in ongoing activities. For example, consider "leadership," one of the research areas discussed by Dawson et al. Dawson (2006) described a carefully developed system for assessing a subject's level of understanding of this concept from responses during an assessment interview. But instead of proceeding this way, an investigator could examine what a subject does upon encountering a particular item while going through his or her "inbox" or when a subordinate asks the subject a specific question in a naturally occurring situation. Thinking about such in situ examples makes it clear that we could not possibly map what a person might do in such situations onto the developmental sequence without drawing on rich prior familiarity with the relevant practices. Therefore, these examples underscore the role played by interpretation. Looking at this matter the other way around, the in situ examples help us to see how much actually is involved when we do use the structured assessment procedures that these investigators have so successfully developed. A great wealth of interpretive appreciation of the phenomena is concretized in those measurement procedures.

The in situ examples also raise a new issue: is the 13-level sequence relevant for some or all naturally occurring situations, or is its relevance limited to the kind of skills that can be assessed in the particular ways typically employed in the research in question, which could be called skills at understanding of a more reflective sort? I am not asserting that the complexity sequence would not hold for a broad range of skills involving in situ behavior. I only wish to point out that the sequence might be limited in these ways. The work is interpretive. It is based on procedures that provide concrete examples of certain meaningful phenomena. Therefore, we can ask whether the assessments made in these investigations actually serve as concrete examples of clearly in situ psychological phenomena and, more generally, we can ask about the range of phenomena that the structured assessments successfully tap. None of this is to argue against the value of this research. It is possible for raters to draw upon their prereflective understanding and employ the carefully developed manuals and the developmental model to assess complexity levels. Furthermore, it is of great interest that research efforts along these lines have demonstrated that the developmental sequence holds in many different areas when skills are assessed using the kinds of procedures that have been employed. In sum, I believe that the research by Dawson, Fischer, and their colleagues represents examples of excellent, "apparently strong" quantitative research.

5. Caveats concerning possible pitfalls

In my position paper, I made several specific suggestions about how researchers should change the ways they use quantitative methods. For example, I argued for using relational codes instead of always coding discrete behaviors. It should now be clear that my comments along those lines were misleading if they suggested that I believe certain quantitative methods (e.g., discrete behavior codes) are always problematic, or if they suggested that I wanted to rule out quite generally what others might call "strong" quantitative methods. According to my approach, "good" quantitative research includes many examples of what others consider "strong" methods in addition to many examples of "soft" methods.

To forestall possible confusion, at this juncture I also want to state that this does not amount to wholesale approval of all quantitative research. I agree with Stam that there are real dangers in what he calls "Pythagoreanism." Quantitative methods frequently are employed in a problematic manner. In my opinion, this occurs when they are used in such a way that they cannot serve their interpretive function. For example, measures of decibel levels are likely to fail at assessing angry behavior if the vocalizations in question do not occur in a structured situation in which loudness serves as a concrete example of such behavior (this is circular, and that is the point). In general, quantitative methods are unhelpful in a particular case insofar as they are actually used in a way that conforms to traditional positivist conceptualizations about "real" measures and the like. Hence, I hope that my position paper and this rejoinder serve to mark out a view about how to employ quantitative methods and how not to employ them, rather than a position about which quantitative methods we should use.

Some brief comments are in order about techniques for statistical analysis. In large measure, I agree with the cautionary remarks Stiles offered in his commentary about "high-end" statistics. How researchers use these techniques very often reflects what I view as a misguided understanding of quantitative research. As Stiles noted, these sophisticated techniques are often applied to decontextualized variables. One of the examples I mentioned in my position paper is relevant here. I argued that investigators studying parent-child interaction from a social learning theory vantage point attempt to explain interactions by breaking them down into isolable, elemental behaviors (e.g., prosocial child behavior, parental praise) instead of taking as their starting point what parent and child are doing together. These researchers then try to put together an account of the exchanges by statistically examining sequential dependencies between these putative building-block behaviors. In my opinion, there is a great deal about the interactions that cannot be recovered when we proceed in this way, no matter how sophisticated we may get at looking at dependencies across multiple lags. At the same time, I also agree with Stiles when he urges us not to throw out inferential statistics because of their historical association with misguided notions about methodology. Notwithstanding Stam's interesting points about longstanding problems that remain unresolved in the logic of hypothesis testing, I think these techniques can be useful. But I do wonder whether something could be gained by reexamining the assumptions of the statistical procedures we employ and considering whether some other analytic techniques are called for in light of the approach to quantitative research I have offered.
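
For readers unfamiliar with this approach, here is a minimal sketch of the basic lag-1 computation such sequential analyses rest on: the conditional probability that one coded behavior follows another, compared with the follower's base rate. The event codes and the sample stream are hypothetical.

    # A minimal sketch of a lag-1 sequential dependency: how often does
    # parental praise follow prosocial child behavior, relative to the
    # base rate of praise? Codes and data are hypothetical.
    from collections import Counter

    stream = ["child_prosocial", "parent_praise", "child_off_task",
              "parent_command", "child_prosocial", "parent_praise",
              "child_prosocial", "parent_command"]

    def lag1_probability(stream, given, target):
        """P(target at t+1 | given at t), estimated from the coded stream."""
        followers = [nxt for cur, nxt in zip(stream, stream[1:]) if cur == given]
        return followers.count(target) / len(followers) if followers else 0.0

    base_rate = Counter(stream)["parent_praise"] / len(stream)
    cond = lag1_probability(stream, "child_prosocial", "parent_praise")
    print(f"P(praise | prosocial at lag 1) = {cond:.2f}, "
          f"base rate = {base_rate:.2f}")

The sketch also makes the criticism concrete: the computation operates on isolated event codes, and nothing in it represents what parent and child are doing together.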

I would like to underscore one point from my position paper that is highly relevant when it comes to pitfalls associated with quantitative research. I believe that theory plays an extremely important role in whether quantitative researchers proceed in ways that are truly useful. In particular, I believe that researchers are likely to conduct "good" quantitative studies if they are guided by theories that are based on the idea that people are always already involved in practical activities in the world. In my position paper, I briefly discussed my participatory approach, which is an attempt to mark out a general framework for theories of this sort (also see Westerman & Steen, in press). I also gave the example of scaffolding research and pointed out that investigators in that area--in contrast to social learning theory researchers--often use relational codes rather than discrete behavior coding. In this rejoinder, I noted that when Wood did code discrete behaviors in his investigations of scaffolding (e.g., Wood & Middleton, 1975), he did so in a way (his specificity scale) that still made it possible to examine what parent and child were doing (i.e., the parent was attempting to teach the child how to build the puzzle) instead of breaking down what they were doing into isolable behaviors (e.g., prosocial behaviors, praise). He even examined sequential dependencies in a simple statistical way to study maternal homing in and out (also see Westerman, 1990), which suggests that statistical analyses, too, are likely to be useful so long as a study is based on helpful theory.

6. Concluding remarks

I have attempted to offer a reconceptualization of quantitative procedures that is much more focused on how we should employ these procedures than on endorsing some of these methods over others. This reconceptualization also puts the distinction between quantitative and qualitative research in a new light. There are differences between the two kinds of research--for example, quantitative research directs more attention to concretely specifying phenomena--but the contrast is less fundamental than most researchers think. From my vantage point, both types of research are aimed at learning about concretely meaningful practices and both are pursued by investigators who are themselves participants in the world of practices.

In their commentary, Dawson et al. suggested that my view is a transitional one because, while it attempts to integrate quantitative and qualitative methods, it comes down on the side of interpretation, privileges qualitative research over quantitative, and excludes positivist approaches. They claimed that their problem-focused methodological pluralism represents a fully integrative model because it includes both positivist and what they call post-positivist approaches. In my opinion, it is the other way around. I believe that in order to integrate the two types of research we need to incorporate all useful examples of both types of work in a new overarching framework that differs from the notions that typically have served to guide each kind of inquiry in the past. As I see it, Dawson et al.'s position is a transitional attempt at integration because it does not go beyond calling for blending the two approaches and their guiding viewpoints. Remarks Yanchar offered in his position paper about mixed-model approaches very effectively present the problems with this strategy for integration (also see Yanchar & Williams, in press). By contrast, I believe that my approach offers the requisite appropriately inclusive overarching framework, which itself is derived from a hermeneutic perspective based on practices. In particular, in this rejoinder, I have tried to show that my approach does not exclude what others would call "strong" quantitative procedures. In addition, my approach does not subordinate this type of quantitative research to "soft" quantitative research, nor does it lead to subordinating quantitative research to qualitative research. I believe that all of these research endeavors represent ways of understanding concretely meaningful phenomena, while they differ in the degree to which they focus on concretely specifying those phenomena versus characterizing them in meaning-laden terms. All, however, are interpretive.

I will conclude with some comments on a related issue: what can we say about when to use quantitative and/or qualitative approaches? All three commentaries include the idea that the choice of methods should depend on the research problem at hand. I agree with this viewpoint. In fact, I believe it is another example of the limits of inquiry, a notion that is central to my perspective. General considerations can only provide what might be called an "outer envelope" for thinking about how to proceed in any given research situation. This outer envelope tells us that we need to find some interpretive method for investigating the phenomenon of interest, that the phenomenon is concretely meaningful in nature, and that the challenge is to find a method or set of methods that is appropriate for this particular problem given where the possible methods fall along a continuum that ranges from the concrete to the meaning-laden--although all points along this continuum have concrete and meaningful aspects. Beyond this, however, we must decide just how to explore the particular research problem at hand as investigators who ultimately pursue our investigations--as Dawson et al. said--in medias res.

References

1. Dawson, T.L. (2006). The Lectical™ Assessment System. Retrieved September 26, 2006, from http://www.lectica.info.

2. Dawson, T.L., Fischer, K. W., & Stein, Z. (2006). Reconsidering qualitative and quantitative research approaches: A cognitive developmental perspective. New Ideas in Psychology, 24, 229-239.

3. Fischer, K.W. (1980). A theory of cognitive development: The control and construction of hierarchies of skills. Psychological Review, 87, 477-531.

4. Fischer, K.W., & Bidell, T.R. (1998). Dynamic development of psychological structures in action and thought. In W. Damon (Series Ed.) & R.M. Lerner (Vol. Ed.), Handbook of child psychology: Vol. 1. Theoretical models of human development (5th ed., pp. 467-561). New York: Wiley.

5. Merleau-Ponty, M. (1962). Phenomenology of perception (C. Smith, Trans.). London: Routledge and Kegan Paul.

6. Stam, H.J. (2006). Pythagoreanism, meaning and the appeal to number. New Ideas in Psychology, 24, 240-251.

7. Stiles, W.B. (2006). Numbers can be enriching. New Ideas in Psychology, 24, 252-262.

8. Strand, P.S. (2002). Coordination of maternal directives with preschoolers' behavior: Influence of maternal coordination training on dyadic activity and child compliance. Journal of Clinical Child Psychology, 31, 6-15.

9. Sugarman, J., & Martin, J. (2005). Toward an alternative psychology. In B.D. Slife, J.S. Reber, & F.C. Richardson (Eds.), Critical thinking about psychology: Hidden assumptions and plausible alternatives (pp. 251-266). Washington, DC: APA Books.

10. Westerman, M.A. (1990). Coordination of maternal directives with preschoolers' behavior in compliance-problem and healthy dyads. Developmental Psychology, 26, 621-630.

11. Westerman, M.A. (2004). Theory and research on practices, theory and research as practices: Hermeneutics and psychological inquiry. Journal of Theoretical and Philosophical Psychology, 24, 123-156.

12. Westerman, M.A. (2006). Quantitative research as an interpretive enterprise: The mostly unacknowledged role of interpretation in research efforts and suggestions for explicitly interpretive quantitative investigations. New Ideas in Psychology, 24, 189-211.

13. Westerman, M.A., & Steen, E.M. (in press). Going beyond the internal-external dichotomy in clinical psychology: The theory of interpersonal defense as an example of a participatory model. Theory & Psychology.

14. Wood, D., & Middleton, D. (1975). A study of assisted problem-solving. British Journal of Psychology, 66, 181-191.

15. Yanchar, S.C. (2006). On the possibility of contextual-quantitative inquiry. New Ideas in Psychology, 24, 212-228.

16. Yanchar, S.C., & Williams, D.D. (in press). Reconsidering the compatibility thesis and eclecticism: Five proposed guidelines for method use. Educational Researcher.