"The implication is that all logical systems of any complexity are, by definition, incomplete; each of them contains, at any given time, more true statements than it can possibly prove according to its own defining set of rules."

There are several possible responses to Gödel.
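For reference, and as my paraphrase rather than the parent's wording: the first incompleteness theorem says, roughly, that for any consistent, effectively axiomatized theory $T$ strong enough to encode basic arithmetic, there is a sentence $G_T$ such that

    \[ T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T , \]

i.e., $T$ can neither prove nor refute $G_T$. Note how much narrower this is than "more true statements than it can possibly prove according to its own defining set of rules."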
(1) You can allow statements to be neither true nor false (the theorem relies on the standard practice of allowing only true or false as truth values). Certainly, in English it's possible to come up with statements that are not simply true or false; see the three-valued sketch after this list.
(1b) Equivalently, you could say that such statements are not "well formed". In English, consider the sentence "It's quicker to go to New York than by train." It's not well-formed enough to be either true or false.
(2) You can limit your axiomatic system so that such statements are not valid. Gödel's proof applies to a sufficiently powerful system of axioms - i.e., one that can encode basic arithmetic and, with it, statements that are essentially of the form "This statement is not provable." (The liar's "This statement is false" is the right intuition, but Gödel's sentence is about provability, not truth.)
(3) You can have a system that allows statements and meta-statements about statements in a strict hierarchy, such that only a meta-statement can refer to a statement. In such a hierarchy no statement can refer to itself, so the troublesome sentences can never be formed at all; see the hierarchy sketch after this list.
(4) You can decline to worry about it, since the undecidable statements are generally self-referential; they don't talk about anything other than themselves, and can (generally?) be safely ignored.
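To make option (1) concrete, here is a minimal sketch of Kleene's strong three-valued logic, where a third value U stands for "neither true nor false". The names (TV, tv_and, and so on) are my own illustration, not anything from the discussion above:

    from enum import Enum

    class TV(Enum):
        F = 0   # false
        U = 1   # neither true nor false
        T = 2   # true

    def tv_not(a: TV) -> TV:
        return TV(2 - a.value)            # swaps T and F, leaves U fixed

    def tv_and(a: TV, b: TV) -> TV:
        return TV(min(a.value, b.value))  # conjunction is the minimum value

    def tv_or(a: TV, b: TV) -> TV:
        return TV(max(a.value, b.value))  # disjunction is the maximum value

    liar = TV.U                  # give the troublesome sentence the third value
    print(tv_and(liar, TV.T))    # TV.U -- still undetermined
    print(tv_or(liar, TV.T))     # TV.T -- one true disjunct settles it

The point is only that a truth value beyond true/false can be handled perfectly mechanically; whether that actually defuses Gödel's argument is a separate question.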
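And for option (3), a sketch of what a strict statement/meta-statement hierarchy could look like. The level check is the whole idea; this is my own illustration, loosely in the spirit of Tarski's hierarchy, not a quote of any real system:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)
    class Sentence:
        level: int                           # 0 = plain statement, 1 = meta-statement, ...
        text: str
        about: Optional["Sentence"] = None   # the lower-level sentence it refers to

    def make_reference(level: int, text: str, target: Sentence) -> Sentence:
        # The hierarchy is enforced here: a sentence may only refer to
        # sentences strictly below its own level, so self-reference can
        # never be constructed.
        if level <= target.level:
            raise ValueError("a sentence may only refer to strictly lower levels")
        return Sentence(level, text, target)

    snow = Sentence(0, "Snow is white.")
    meta = make_reference(1, "'Snow is white.' is true.", snow)   # allowed
    # make_reference(0, "This statement is false.", ...) can never be built:
    # its target would have to sit below level 0.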
"Gödel's Theorem has been used to argue that a computer can never be as smart as a human being because the extent of its knowledge is limited by a fixed set of axioms, whereas people can discover unexpected truths ..."

This assumes that a computer is representable as a formal system while a human is not, which is not at all obvious to me. In fact, it looks like begging the question.
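For what "representable as a formal system" even means here, a minimal sketch: a formal system is just axioms plus mechanical rewrite rules, and a computer can enumerate its theorems. I'm using Hofstadter's MU puzzle as the toy system; the function names are mine:

    from collections import deque

    AXIOM = "MI"    # the single axiom of the MU puzzle

    def successors(s):
        # The four mechanical rewrite rules of the system.
        if s.endswith("I"):
            yield s + "U"                    # xI  -> xIU
        if s.startswith("M"):
            yield s + s[1:]                  # Mx  -> Mxx
        for i in range(len(s) - 2):
            if s[i:i+3] == "III":
                yield s[:i] + "U" + s[i+3:]  # III -> U
        for i in range(len(s) - 1):
            if s[i:i+2] == "UU":
                yield s[:i] + s[i+2:]        # UU  -> (nothing)

    def theorems(n):
        """Yield the first n distinct theorems in breadth-first derivation order."""
        seen, queue, count = {AXIOM}, deque([AXIOM]), 0
        while queue and count < n:
            s = queue.popleft()
            yield s
            count += 1
            for t in successors(s):
                if t not in seen:
                    seen.add(t)
                    queue.append(t)

    print(list(theorems(10)))

Whether a human mind is exhaustively described by anything of this shape is exactly the premise in dispute, which is why the argument begs the question.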