This post is inspired by a recent Orthosphere post on the Turing test as well as the discussion in the comments. I also read Turing's 1950 paper "Computing Machinery and Intelligence" to see how he considered this issue.
The question of whether a machine can think involves two questions. Although these are related, it is worth distinguishing them for the sake of clarity in thinking. The first is the theoretical question: Is it possible for humans (or perhaps some other species) to make a machine that can think? In asking this question, I am using "thinking" as it is generally understood, in that thinking requires consciousness. It may also be that all consciousness carries with it some degree of free will, so any conscious machine would also have free will and could be autonomous in its actions.
This question has two parts. First, whether it is possible at all. Second, whether any human being will ever be able to figure out how to do so. It may be that there is a method for making conscious artifacts but no human being will ever have the intelligence, creativity, and understanding to discover it. As to whether it is possible at all, a common response is to flippantly say: "Humans are machines and we think, so it must be possible." But this statement already begs the question. It is better to say: "We know that mind and matter can occur together in humans and animals, so it may be possible for artifacts."
As to whether this is actually possible, it is simply unknown. We do not know how mind and matter connect, so we do not know how to bring about such a connection. We do not know what method, if any, would work; however, we can rule out certain proposed methods. In particular, computation is not sufficient to bring about consciousness. Computation is simply rule-following; it is lesser than consciousness: a conscious human being can generate computations (by doing an arithmetic problem, for instance), but computation alone does not generate consciousness.
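To make "rule-following" concrete, here is a minimal Python sketch (my own illustration, not anything from Turing's paper) of the grade-school addition procedure. The program applies the digit and carrying rules mechanically, exactly as a schoolchild does on paper, and is aware of nothing while doing so:

```python
def add_by_rules(a: str, b: str) -> str:
    """Add two non-negative integers given as decimal strings,
    following the grade-school carrying rules digit by digit."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)   # pad to equal length
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry   # apply the addition rule
        carry, digit = divmod(total, 10)    # apply the carrying rule
        digits.append(str(digit))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_by_rules("478", "396"))  # prints 874
```

The point is not that addition is all computation can do, but that every computation, however elaborate, is this same kind of mechanical rule application.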
This brings us to the second question, which is the practical issue: to what extent can human beings make machines that can imitate human behavior, regardless of whether the machines are conscious or not?
The answer to this question is also unknown. We do not know the limits of human inventiveness, and we do not know all the possible methods by which machines might imitate human behavior, so the question cannot be answered in general.
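As a toy illustration of imitation without consciousness, here is a sketch in the spirit of Weizenbaum's ELIZA (my own simplified version with made-up rules, not Weizenbaum's code): a handful of pattern-matching rules can produce a passable fragment of human conversation while understanding nothing at all.

```python
import re

# A few canned pattern -> response rules, in the spirit of ELIZA.
RULES = [
    (r"\bi feel (.+)",  "Why do you feel {0}?"),
    (r"\bi am (.+)",    "How long have you been {0}?"),
    (r"\bbecause (.+)", "Is that the real reason?"),
    (r".*",             "Please tell me more."),  # catch-all rule
]

def respond(line: str) -> str:
    """Return the first matching canned response; no understanding involved."""
    for pattern, template in RULES:
        match = re.search(pattern, line.lower())
        if match:
            return template.format(*match.groups())
    return "Please tell me more."  # unreachable: the catch-all always matches

print(respond("I feel tired today"))  # Why do you feel tired today?
print(respond("I am not sure"))       # How long have you been not sure?
```

However convincing the responses may momentarily seem, the program is doing nothing but matching text against patterns, which is the distinction the second question leaves entirely open.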
By distinguishing these two questions, we can see that there are two distinct approaches to artificial intelligence. Those drawn to the first question are primarily interested in philosophy, in understanding consciousness as it is in itself, not in how it can be redefined to fit a current research program.
On the other hand, I would estimate that the majority of AI enthusiasts are primarily interested in the second question. Their goal is to make more powerful computers and to make computers that can perform more tasks. They are not really interested in the philosophical issue.
And this makes sense, because the question of consciousness is not directly related either to making machines imitate human behavior or to increasing their computational power. There are animals that live in remote places and hardly interact with humans. These animals are conscious, and it may well be that someone discovers a means to endow a machine with a consciousness remote from human concerns, as these animals have. Also, consciousness and computational power do not inherently go together.