11 March 2012

The Problem with Problem-Based Learning

Prompt #5 - "Discuss ill-structured vs. well-structured problems. Refer to Jonassen's (2000) article"

The Problem with Problem-Based Learning
by Jody Bowie

Jonassen (2000) quotes Gagné (1980) as saying "the central point of education is to teach people to think, to use their rational powers, to become better problem solvers" (p. 85). This statement resonates with me and aligns with my own philosophy of education. In fact, this idea is the basis of the parent discipline of the natural sciences: physics (formerly, natural philosophy), which made the subject a natural fit for me as a teacher. (Or has the fact that I taught physics shaped my philosophy of education? Maybe this will require further reflection/research.) Greats like Newton and Galileo worked toward an understanding of observable phenomena; they worked on ill-structured problems. These phenomena (acceleration, gravity, and the like) had been observed but not explained. These problems, and the ways in which these men solved them, are still the basis for today's entry-level science classes. Science classes are taught within the historical context of the Journey of the Pillars of Problem Solving.

Background

Problems are an "unknown entity" (Jonassen, 2000, p. 65) solved via a "goal-oriented sequence of cognitive actions" (Anderson, 1980, p. 250). These problems vary in complexity, domain, and structure. Hopefully, they are presented at varying levels based on age/developmental appropriateness. Our main focus will be on structure because, while Jonassen argues that among the characteristics of problems "... they are neither independent nor equivalent" (p. 66), the structure of a problem depends on the other dimensions, i.e., complexity and abstractness.

Well-structured problems are formal, domain-specific, have a well-defined initial state, and have a clear solution. Because well-structured problems often have a single solution, they are relatively easy to assess; they can be assessed in a "mass-gradable" format, e.g. multiple-choice. There is a clear solution to these problems, and (hopefully) the teacher knows, or at least has access to, that solution/answer. Occasionally, there is only one path to the solution, and all students draw on the same intellectual skills/processes to arrive at the "destination." If you can find the answer to your problem in the back of a book (or on Google), you are working on a well-structured problem.
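To make the "mass-gradable" idea concrete, here is a minimal sketch in Python; the answer key and responses are invented for illustration, not drawn from any real grading system:

    # A well-structured assessment has exactly one accepted answer per item,
    # so grading reduces to comparing responses against a key.
    # The key and responses below are hypothetical.
    ANSWER_KEY = {1: "B", 2: "D", 3: "A"}

    def grade(responses):
        """Return the fraction of items answered correctly."""
        correct = sum(1 for item, answer in ANSWER_KEY.items()
                      if responses.get(item) == answer)
        return correct / len(ANSWER_KEY)

    print(round(grade({1: "B", 2: "C", 3: "A"}), 2))  # 0.67: two of three items match

No comparable answer key exists for an ill-structured problem, which is exactly why those problems call for a human grader and a rubric.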

Ill-structured problems are broader, often cross-disciplinary, may or may not have a well-defined initial state, and do not have a clear, single solution. These problems are much more difficult to assess and may be graded via a "component/skill rubric," i.e. a rubric with specific components of a concept, or specific skills, that are assessed individually. These problems might cover a number of skills/ideas and often incorporate seemingly unrelated ideas. However, when students begin to consider the implications (economic, cultural, moral, civil-rights) of their particular solution, these "unrelated ideas" become very relevant. Ill-structured problems have numerous answers (or none) and will likely give students the opportunity to arrive at solutions through a number of paths (strategies).

To address Cates' question, "I am also curious as to how everyone feels about the new Core Standards and if it will be easier, or more realistic, to incorporate ill-structured problems into classroom instruction?": I'm not sure whether it will be "more realistic or easier," but if PARCC is building assessments around ill-structured problems, you can bet that teachers had better be exposing students to this type of assessment. Otherwise, students' performance on the assessments will be a disaster. I don't mean to sound as though we should "teach to the test." However, if our objective is to increase students' ability to problem solve (authentically) and the assessments are designed accordingly, our instruction should be driven by those assessments. Isn't that how the learning-objective/assessment relationship is supposed to work? This ties directly to Jonassen's assertion that two very strong predictors of success in problem solving are students' familiarity with the problem type and their domain knowledge. If students have sufficient domain knowledge and some familiarity with the problem type, they will be able to solve the problem successfully. The reciprocal is that successful problem solving should be an indicator of knowledge within the domain specified by the problem: students can show mastery (or at least knowledge) of a domain, or of a concept within that domain.

Issues

Based on the title of this post, there should be a problem. So where/what is it, you ask? It lies in the planning on the part of the teacher. Teachers (no surprise) are the key to students' ability to problem solve. (If the rest of this writing sounds like I "know it all," I do not mean it that way. I'm learning so much about what I did wrong in my classes of the past, and I'm doing my best to apply that to my current teaching load.) If teachers rely only on the practice problems in the book, Scantron (or self-grading tests), and pre-made test banks, we will keep getting what we have always gotten, or worse, as shown in the results of the 2009 PISA. Teachers must target their instruction to the needs of their students. How can pre-made materials, test banks, or PowerPoints possibly know what your students need to learn, based on their current level of knowledge/skill? I keep thinking over and over as I write, "Set the bar based on the abilities of your current students. Set that bar high. If some make it over, great. Hopefully, everyone else jumped as high as possible." The fact remains, the bar needs to be adjusted according to the current group of students. Likely, that will involve keeping up with current research within a discipline and reframing knowledge within that domain through current cultural and socio-economic lenses. Not only will this give teachers a steady supply of new problems for students to solve; it will also let them model one of the intended outcomes of PBL: living as a lifelong learner.

As we discussed during Week 6, many adaptive, stand-alone technologies are emerging in education. These technologies give students a pre-test, identify their weaknesses, and differentiate autonomous instruction to meet students at their point of need. They assess based on factual information, skill attainment, and/or some analysis. These assessments are built on problems that have a specific answer, likely because no one has yet written an algorithm that allows a computer to assess ill-structured problems, given the nature of those problems. My point here is that many lower-level thinking processes, facts, and skills can be handled (to some extent) by a program (adaptive technology), which leaves the teacher in the role of lab monitor. While I am not implying that fear of losing our jobs should drive us to enrich our students' learning experiences through authentic problem-solving, job security is a side benefit! Assessment of ill-structured problems, at least currently, can only be done by a human capable of considering all aspects of a student's solution and the way in which the student arrived at it.
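As a rough illustration of the adaptive loop described above (pre-test, flag weaknesses, route instruction), here is a sketch in Python; the topics, mastery threshold, and module names are all invented for the example, not taken from any real product:

    # Hypothetical adaptive-technology loop: score a pre-test by topic,
    # flag weak topics, and assign remedial modules. Note that this only
    # works because each pre-test item has a single machine-checkable
    # answer, i.e., it is a well-structured problem.
    MASTERY_THRESHOLD = 0.8  # assumed cutoff for "mastered"

    def route_instruction(pretest_scores):
        """Map {topic: score} to the list of modules a student still needs."""
        return [f"remedial module: {topic}"
                for topic, score in pretest_scores.items()
                if score < MASTERY_THRESHOLD]

    print(route_instruction({"kinematics": 0.9, "forces": 0.55, "energy": 0.7}))
    # ['remedial module: forces', 'remedial module: energy']

Nothing in that loop can weigh the economic or moral implications of a student's proposed solution; that judgment is still the teacher's.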

Solution

Teachers must continue their own learning so they can engage students in ill-structured problems while still providing authentic assessment of students' problem-solving skills, critical thinking, and (concept-specific) domain knowledge. As part of this, students should also be assessed on their ability to make cross-disciplinary connections, which is done most easily when other disciplines are brought into the process. For example, Michelle and I are going to be part of a paired class next semester, in which she will teach writing/research (skill) and I will teach technology (skill), through the lens of American History (the context). Finally, these problems should be student-directed. Students should be able to construct their own relevance/motivation by choosing a topic that both fits the context of the class and genuinely interests them. Jonassen suggests that students "...think harder and process more deeply when they are interested..." and when they "...have high self-efficacy" (p. 73). Allowing students to select their own problems enables them to choose those in which they are (or can be) interested and that they believe they have the ability to solve.

References

  1. Anderson, J. R. (1980). Cognitive psychology and its implications. San Francisco: Freeman.

  2. Gagné, R. M. (1980). Learnable aspects of problem solving. Educational Psychologist, 15(2), 84-92.

  3. Jonassen, D. H. (2000). Toward a design theory of problem solving. Educational Technology Research & Development, 48(4), 63-85.

  4. Plekhanov, A. (2011). PISA Results: How does quality of education compare across the EBRD’s countries of operation? Retrieved from http://www.ebrdblog.com/wordpress/2011/03/pisa-results-how-does-quality-of-education-compare-across-the-ebrds-countries-of-operation/
