DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This communication is in response to application No. 18/890,726, filed on 09/19/2024. Claims 1-20 are currently pending and have been examined. Claims 1-20 have been rejected as follows.
Priority
Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.
Information Disclosure Statement
The information disclosure statement (IDS) filed on 06/26/2025 has been acknowledged.
Drawings
The drawings are objected to because the lines in Figs. 3-6 are difficult to differentiate from one another, making the graphs difficult to read. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification
The disclosure is objected to because of the following informalities:
In par. 35, “legger robot” should be “legged robot”;
In par. 86, “As Table 1 of figure 1B shows” should be “As Table 1 of figure 2B shows”;
In par. 147, “one or more memory and/or storage units 420 that stores software 493, operating system 494m information 491 and metadata 492” should be “one or more memory and/or storage units 420 that stores software 493, operating system 494, information 491, and metadata 492”.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
Claims 1-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Analysis of the claim(s) regarding subject matter eligibility is described below.
STEP 1: STATUTORY CATEGORIES
Claims 1-17 fall within at least one of the four statutory subject matter categories (a process, machine, manufacture, or composition of matter).
STEP 2A: JUDICIAL EXCEPTIONS
PRONG 1: RECITATION OF A JUDICIAL EXCEPTION
The independent claims 1 and 13 recite:
Learning using reinforcement learning and by a processing circuit, an action-related corrective policy that once applied reduces a gap associated with an initial simulation state transition function and with a real world state transition function
Determining a control policy of the robot in a simulator, using the action-related corrective policy
These limitations recite an abstract idea within the mathematical concepts grouping (mathematical relationships, formulas or equations, and calculations). Learning an action-related corrective policy using reinforcement learning utilizes mathematical formulas and calculations. Determining a control policy using the action-related corrective policy would likewise utilize mathematical formulas and calculations, or alternatively could be considered a mental process (evaluation or judgment).
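Purely as an illustration of the mathematical character of these limitations, the recited gap reduction can be expressed as an optimization objective. The notation below is the Examiner's own shorthand and does not appear in the claims; T_sim and T_real denote the initial simulation and real world state transition functions, and π_c denotes the action-related corrective policy:

```latex
\pi_c^{*} = \arg\min_{\pi_c}\;
  \mathbb{E}_{(s,a)}\Big[\big\| T_{\mathrm{sim}}\big(s,\, a + \pi_c(s,a)\big)
  - T_{\mathrm{real}}(s,a) \big\|\Big]
```

Minimizing such an expectation is a mathematical calculation regardless of how it is estimated, which is why these limitations fall within the mathematical concepts grouping.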
PRONG 2: INTEGRATION INTO A PRACTICAL APPLICATION
The additional element(s) recited in the claim(s) beyond the judicial exception are “a processing circuit” in claim 1 and “non-transitory computer readable medium” and “a processing unit” in claim 13. The additional elements do not integrate the judicial exception into a practical application because they do not apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on it. Limitations that the courts have identified as not integrating a judicial exception into a practical application include:
• Merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f);
• Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, as discussed in MPEP § 2106.05(d);
• Adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g); and
• Generally linking the use of a judicial exception to a particular technological environment or field of use, as discussed in MPEP § 2106.05(h).
The elements “a processing circuit,” “non-transitory computer readable medium,” and “a processing unit” are recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that they amount to no more than instructions to apply the exception using a generic computer component. These additional elements, when considered separately and as an ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea and are recited at a high level of generality (MPEP 2106.05(f)).
STEP 2B: INVENTIVE CONCEPT/SIGNIFICANTLY MORE
The additional elements recited in the claim(s) are not sufficient to amount to significantly more than the judicial exception because they do not add more than insignificant extra-solution activity to the judicial exception (MPEP 2106.05(g)) and amount to simply adding the equivalent of the words “apply it” with the judicial exception (MPEP 2106.05(f)), as stated above. Further, the additional elements recited in the claim(s) are well-understood, routine, and conventional activities previously known to the industry, specified at a high level of generality (MPEP 2106.05(d)).
Dependent claims 2-12 and 14-17 further define the abstract idea that is present in their independent claims 1 and 13 and thus correspond to an abstract idea for the reasons presented above. The dependent claims do not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception when considered both individually and as an ordered combination. Therefore, the dependent claims are directed to an abstract idea. Thus, the claims 2-12 and 14-17 are not patent-eligible.
Based on the above analysis, claims 1-17 are not eligible subject matter and are rejected under 35 U.S.C. 101.
Claim Objections
Claims 1, 5, 13, and 17-18 are objected to because of the following informalities:
It is unclear if the “processing circuit” of claim 1 is the same as the “processor” of claim 5.
It is unclear if the “processing unit/processing circuit” of claim 13 is the same as the “processor” of claim 17.
In claim 13, it is unclear if the “processing unit” and “processing circuit” are the same element.
If these all refer to the same element, the naming should be standardized for clarity.
In claims 5 and 17, it is unclear if “the real world state transition policy” is the same as the “real world state transition function” introduced in claims 1 and 13.
Appropriate correction is required.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 18 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Jang (US 20200333795 A1).
Regarding claim 18, Jang teaches a method for controlling a robot (abstract, "A method for controlling movement of a real object by using an intelligent agent trained in a virtual environment"; par. 54, "The real object may be a drone, an autonomous vehicle, a robot cleaner, or the like"), the method comprising:
sensing information by one or more sensors of the robot (par. 67, “the state monitoring unit 21 may monitor and collect state information (e.g., temperature, position, altitude, direction, speed, rotation, etc.) of the real object located in the real environment and/or state information (e.g., temperature, humidity, wind direction, wind speed, friction, geothermal, etc. measured in the real environment) for the real environment”);
and controlling a movement of the robot, based on the sensed information, by applying a robot control policy learnt using an action-related corrective policy that once applied reduces a gap associated with an initial simulation state transition function and with a real world state transition function (abstract, “determining a first action value for the first state by using the intelligent agent; obtaining a second action value by correcting the first action value so that a state change of the real object coincides with a state change of the virtual object; and inputting the second action value to the real object”; see Fig. 5).
Regarding claim 20, Jang teaches a non-transitory computer readable medium for controlling a robot (abstract, "A method for controlling movement of a real object by using an intelligent agent trained in a virtual environment"; par. 54, "The real object may be a drone, an autonomous vehicle, a robot cleaner, or the like"), the non-transitory computer readable medium storing instructions executable by a processing unit (par. 20, “a memory storing instructions causing the at least one processor to perform at least one step”) for:
sensing information by one or more sensors of the robot (par. 67, “the state monitoring unit 21 may monitor and collect state information (e.g., temperature, position, altitude, direction, speed, rotation, etc.) of the real object located in the real environment and/or state information (e.g., temperature, humidity, wind direction, wind speed, friction, geothermal, etc. measured in the real environment) for the real environment”);
and controlling a movement of the robot, based on the sensed information, by applying a robot control policy learnt using an action-related corrective policy that once applied reduces a gap associated with an initial simulation state transition function and with a real world state transition function (abstract, “determining a first action value for the first state by using the intelligent agent; obtaining a second action value by correcting the first action value so that a state change of the real object coincides with a state change of the virtual object; and inputting the second action value to the real object”; see Fig. 5).
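To clarify the mapping for claims 18 and 20, the two-step action correction cited from Jang's abstract and Fig. 5 can be sketched as the following control loop. This is the Examiner's illustrative paraphrase only; identifiers such as intelligent_agent and correction_model are hypothetical names, not Jang's reference numerals:

```python
# Illustrative sketch of the action-correction loop described in Jang's
# abstract and Fig. 5. All identifiers are hypothetical.

def control_step(robot, intelligent_agent, correction_model):
    # Sense the current state of the real object (cf. par. 67: position,
    # speed, direction, etc. collected by the state monitoring unit).
    state = robot.sense()

    # First action value, determined by the agent trained in the
    # virtual environment.
    first_action = intelligent_agent.act(state)

    # Second action value: correct the first action so that the state
    # change of the real object coincides with that of the virtual object.
    action_diff = correction_model.predict(state, first_action)
    second_action = first_action + action_diff

    # Input the second action value to the real object.
    robot.apply(second_action)
```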
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2, 5-7, 11-14, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Jang in view of Benbrahim (Benbrahim H, Franklin JA. Biped dynamic walking using reinforcement learning. Robotics and Autonomous Systems. 1997;22(3-4):283-302. doi:10.1016/s0921-8890(97)00043-2).
Regarding claim 1, Jang teaches a method for learning a robot control policy (abstract, "A method for controlling movement of a real object by using an intelligent agent trained in a virtual environment"; par. 54, "The real object may be a drone, an autonomous vehicle, a robot cleaner, or the like"), the method comprising:
learning (par. 18, processor) an action-related corrective policy that once applied reduces a gap associated with an initial simulation state transition function and with a real world state transition function (abstract, “determining a first action value for the first state by using the intelligent agent; obtaining a second action value by correcting the first action value so that a state change of the real object coincides with a state change of the virtual object; and inputting the second action value to the real object”; see Fig. 5);
and determining a control policy of the robot in a simulator, using the action-related corrective policy (see Fig. 5, the robot is controlled using the action a determined by the intelligent agent’s policy as well as the action diff a_diff determined by the additional action prediction model).
Jang fails to explicitly teach that the action-related corrective policy is learned using reinforcement learning. Jang teaches that the action-related corrective policy uses a pre-trained additional action prediction model (see Fig. 5, additional action prediction model 52 and neural networks 52a and 52b) to predict the action difference. However, it would have been prima facie obvious to one of ordinary skill in the art that this model could be trained using reinforcement learning, as reinforcement learning is common in the field. For example, Benbrahim teaches learning a robot control policy using reinforcement learning (see abstract). It would therefore have been obvious to use reinforcement learning to determine the action difference.
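To illustrate the proposed modification (a sketch of the Examiner's rationale only, not a disclosure of either reference; names such as sim_env and corrective_policy are hypothetical), Jang's action prediction model could be trained with a reinforcement learning reward that is higher when the corrected simulated transition matches a logged real world transition:

```python
import numpy as np

# Hypothetical sketch: a reinforcement learning reward for training the
# action-difference model, per the proposed Jang/Benbrahim combination.

def corrective_reward(sim_env, corrective_policy, state, action, real_next_state):
    # Sample an action difference from the corrective policy.
    action_diff = corrective_policy.sample(state, action)

    # Apply the corrected action in the simulator.
    sim_next_state = sim_env.step(state, action + action_diff)

    # The reward is higher (less negative) when the simulated transition
    # matches the logged real world transition, i.e. when the gap between
    # the two state transition functions is reduced.
    return -np.linalg.norm(sim_next_state - real_next_state)
```

A standard policy-gradient update would then reinforce the action differences that earn high reward.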
Regarding claim 2, the combination of Jang in view of Benbrahim teaches the method according to claim 1. Jang further teaches determining the gap by comparing the initial simulation state transition function to the real world state transition function (par. 6, "the compensation value is a value that corrects (or eliminates) an error that may occur between the action of the virtual object located in the virtual environment and the action of the real object located in the real environment, and may be a value for correcting a difference between the virtual environment that the intelligent agent learns and the real environment on which the intelligent agent should make determinations, a modeling error for the virtual object 10 and the virtual environment, and the like").
Regarding claim 5, the combination of Jang in view of Benbrahim teaches the method according to claim 1. Jang further teaches the learning of the action-related corrective policy is preceded by:
learning, by a processor, an initial control policy of the robot in a simulator (Fig. 5, intelligent agent 51 with policy π);
the initial control policy is indicative of the initial simulated state transition function (par. 10, “a method for controlling movement of a real object by using an intelligent agent trained in a virtual environment may comprise determining an initial action value for an initial state of the real object by using an intelligent agent trained in a virtual object simulating the real object in a virtual environment”);
and obtaining real world data associated with an applying of the initial control policy by a real world robot (par. 67, “the state monitoring unit 21 may monitor and collect state information (e.g., temperature, position, altitude, direction, speed, rotation, etc.) of the real object located in the real environment and/or state information (e.g., temperature, humidity, wind direction, wind speed, friction, geothermal, etc. measured in the real environment) for the real environment”);
the real world data is indicative of the real world state transition policy (abstract, “obtaining a second action value by correcting the first action value so that a state change of the real object coincides with a state change of the virtual object”—matches the simulated object state to the real object state).
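The three preceding steps for which Jang is cited can be summarized in the following illustrative pipeline (the Examiner's sketch; identifiers such as train_in_simulator and real_robot are hypothetical). The logged transitions stand in for the real world data that is indicative of the real world state transition function:

```python
# Illustrative pipeline for the steps preceding the learning of the
# action-related corrective policy (claims 5 and 17). Names are hypothetical.

def collect_real_world_data(train_in_simulator, real_robot, num_steps):
    # Step 1: learn an initial control policy in the simulator
    # (cf. Jang Fig. 5, intelligent agent 51 with policy pi).
    initial_policy = train_in_simulator()

    # Step 2: apply the initial policy on the real world robot and log
    # the observed transitions (cf. Jang par. 67, state monitoring unit).
    transitions = []
    state = real_robot.sense()
    for _ in range(num_steps):
        action = initial_policy.act(state)
        real_robot.apply(action)
        next_state = real_robot.sense()
        transitions.append((state, action, next_state))
        state = next_state

    # The (state, action, next_state) tuples empirically characterize
    # the real world state transition function.
    return initial_policy, transitions
```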
Regarding claim 6, the combination of Jang in view of Benbrahim teaches the method according to claim 5. Jang teaches the initial control policy (Fig. 5, intelligent agent 51 with policy π).
While Jang fails to explicitly teach that the initial control policy is learned using reinforcement learning, it would have been prima facie obvious to one of ordinary skill in the art that this policy could be learned using reinforcement learning, as reinforcement learning is common in the field. For example, Benbrahim teaches learning a robot control policy using reinforcement learning (see abstract).
Regarding claim 7, the combination of Jang in view of Benbrahim teaches the method according to claim 5. Jang further teaches applying the initial control policy by the real world robot (see Fig. 5, initial control policy π is used to determine how to control the robot).
Regarding claim 11, the combination of Jang in view of Benbrahim teaches the method according to claim 1. Jang fails to explicitly teach the robot is a bipedal robot. However, one of ordinary skill in the art would be able to recognize that the techniques found in Jang would also be applicable to a bipedal robot.
Benbrahim explicitly teaches the robot is a bipedal robot (see abstract).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jang to incorporate the teachings of Benbrahim to make the robot a bipedal robot. Jang teaches a method for controlling a real object, such as “a drone, an autonomous vehicle, a robot cleaner, or the like” (par. 54). Although a bipedal robot is not explicitly taught, applying Jang's control method to Benbrahim's bipedal robot would have been the predictable application of a known technique to another known type of mobile robot.
Regarding claim 12, the combination of Jang in view of Benbrahim teaches the method according to claim 1. Jang fails to explicitly teach the robot is a legged robot. However, one of ordinary skill in the art would be able to recognize that the techniques found in Jang would also be applicable to a legged robot.
Benbrahim explicitly teaches the robot is a legged robot (see abstract).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jang to incorporate the teachings of Benbrahim to make the robot a legged robot. Jang teaches a method for controlling a real object, such as “a drone, an autonomous vehicle, a robot cleaner, or the like” (par. 54). Although a legged robot is not explicitly taught, applying Jang's control method to Benbrahim's legged robot would have been the predictable application of a known technique to another known type of mobile robot.
Regarding claim 13, Jang teaches a non-transitory computer readable medium for learning a robot control policy (abstract, "A method for controlling movement of a real object by using an intelligent agent trained in a virtual environment"; par. 54, "The real object may be a drone, an autonomous vehicle, a robot cleaner, or the like"), the non-transitory computer readable medium storing instructions executable by a processing unit (par. 20, “a memory storing instructions causing the at least one processor to perform at least one step”) for:
learning (par. 18, processor) an action-related corrective policy that once applied reduces a gap associated with an initial simulation state transition function and with a real world state transition function (abstract, “determining a first action value for the first state by using the intelligent agent; obtaining a second action value by correcting the first action value so that a state change of the real object coincides with a state change of the virtual object; and inputting the second action value to the real object”; see Fig. 5);
and determining a control policy of the robot in a simulator, using the action-related corrective policy (see Fig. 5, the robot is controlled using the action a determined by the intelligent agent’s policy as well as the action diff a_diff determined by the additional action prediction model).
Jang fails to explicitly teach that the action-related corrective policy is learned using reinforcement learning. Jang teaches that the action-related corrective policy uses a pre-trained additional action prediction model (see Fig. 5, additional action prediction model 52 and neural networks 52a and 52b) to predict the action difference. However, it would have been prima facie obvious to one of ordinary skill in the art that this model could be trained using reinforcement learning, as reinforcement learning is common in the field. For example, Benbrahim teaches learning a robot control policy using reinforcement learning (see abstract). It would therefore have been obvious to use reinforcement learning to determine the action difference.
Regarding claim 14, the combination of Jang in view of Benbrahim teaches the non-transitory computer readable medium according to claim 13. Jang further teaches storing instructions executable by a processing unit for determining the gap by comparing the initial simulation state transition function to the real world state transition function (par. 6, "the compensation value is a value that corrects (or eliminates) an error that may occur between the action of the virtual object located in the virtual environment and the action of the real object located in the real environment, and may be a value for correcting a difference between the virtual environment that the intelligent agent learns and the real environment on which the intelligent agent should make determinations, a modeling error for the virtual object 10 and the virtual environment, and the like").
Regarding claim 17, the combination of Jang in view of Benbrahim teaches the non-transitory computer readable medium according to claim 13. Jang further teaches the learning of the action-related corrective policy is preceded by:
learning, by a processor, an initial control policy of the robot in a simulator (Fig. 5, intelligent agent 51 with policy π);
the initial control policy is indicative of the initial simulated state transition function (par. 10, “a method for controlling movement of a real object by using an intelligent agent trained in a virtual environment may comprise determining an initial action value for an initial state of the real object by using an intelligent agent trained in a virtual object simulating the real object in a virtual environment”);
and obtaining real world data associated with an applying of the initial control policy by a real world robot (par. 67, “the state monitoring unit 21 may monitor and collect state information (e.g., temperature, position, altitude, direction, speed, rotation, etc.) of the real object located in the real environment and/or state information (e.g., temperature, humidity, wind direction, wind speed, friction, geothermal, etc. measured in the real environment) for the real environment”);
the real world data is indicative of the real world state transition policy (abstract, “obtaining a second action value by correcting the first action value so that a state change of the real object coincides with a state change of the virtual object”—matches the simulated object state to the real object state).
Regarding claim 19, Jang teaches the method according to claim 18. Jang fails to explicitly teach learning, using reinforcement learning, the action-related corrective policy. Jang teaches that the action-related corrective policy uses a pre-trained additional action prediction model (see Fig. 5, additional action prediction model 52 and neural networks 52a and 52b) to predict the action difference. However, it would have been prima facie obvious to one of ordinary skill in the art that this model could be trained using reinforcement learning, as reinforcement learning is common in the field. For example, Benbrahim teaches learning a robot control policy using reinforcement learning (see abstract). It would therefore have been obvious to use reinforcement learning to determine the action difference.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Jang in view of Benbrahim, and further in view of Myung (US 20250018560 A1).
Regarding claim 8, the combination of Jang in view of Benbrahim teaches the method according to claim 7. Both Jang and Benbrahim fail to teach the applying of the initial control policy by the real world robot is executed in a zero-shot setting.
However, Myung teaches applying of the initial control policy by the real world robot is executed in a zero-shot setting (par. 58 and Fig. 2, zero-shot sim-to-real).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Jang in view of Benbrahim to incorporate the teachings of Myung such that the applying of the initial control policy by the real world robot is executed in a zero-shot setting. Doing so would produce a policy that can generalize well in novel situations.
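For clarity, zero-shot sim-to-real deployment as cited from Myung simply means the simulator-trained policy is run on the hardware with no intervening real world training updates. A minimal sketch (the Examiner's illustration; all names are hypothetical):

```python
# Minimal sketch of zero-shot sim-to-real deployment (cf. Myung par. 58
# and Fig. 2). Hypothetical names; note the absence of any fine-tuning
# step between simulation training and real world execution.

def deploy_zero_shot(train_in_simulator, real_robot, num_steps=1000):
    policy = train_in_simulator()    # all learning happens in simulation

    state = real_robot.sense()
    for _ in range(num_steps):       # deploy directly on the hardware
        action = policy.act(state)   # the policy is used as-is: zero-shot
        real_robot.apply(action)
        state = real_robot.sense()
```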
Claims 3 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Jang in view of Benbrahim, and further in view of Husain (Husain H, Ciosek K, Tomioka R. Regularized Policies are Reward Robust. PMLR. Published online March 18, 2021:64-72. Accessed January 9, 2026. https://proceedings.mlr.press/v130/husain21a.html).
Regarding claim 3, the combination of Jang in view of Benbrahim teaches the method according to claim 1. Both Jang and Benbrahim fail to explicitly teach the learning of the action-related corrective policy comprising applying a corrective reward and a regularization reward. However, Jang teaches an additional action prediction model that predicts an action difference, and Benbrahim teaches a reinforcement learning technique that uses rewards. In combination, it would have been prima facie obvious that the additional action prediction model, if trained using reinforcement learning, would have used a reward. The Examiner is interpreting a “corrective reward” as a reward based on how close the predicted action difference is to the actual action difference (in other words, a reward based on how correct the correction value is).
Regarding the regularization reward, Husain teaches that regularization rewards are already well-known in the art (see abstract). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Jang in view of Benbrahim to incorporate the teachings of Husain to add a regularization reward in order to prevent over-fitting of the policy.
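Under the Examiner's interpretation above, a combined reward could take the following illustrative form. The regularization weight lam and the squared-norm penalty are hypothetical design choices offered only to show that the two recited reward terms are conventional; they are not taken from any cited reference:

```python
import numpy as np

# Illustrative combined reward under the Examiner's interpretation of
# claims 3 and 15. The weight and penalty form are hypothetical.

def combined_reward(predicted_diff, actual_diff, lam=0.01):
    # Corrective reward: higher when the predicted action difference is
    # close to the actual action difference.
    corrective = -np.linalg.norm(predicted_diff - actual_diff)

    # Regularization reward: penalizes large corrections so the learned
    # policy does not over-fit (cf. Husain, abstract).
    regularization = -lam * float(np.sum(np.square(predicted_diff)))

    return corrective + regularization
```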
Regarding claim 15, the combination of Jang in view of Benbrahim teaches the non-transitory computer readable medium according to claim 13. Both Jang and Benbrahim fail to explicitly teach the learning of the action-related corrective policy comprising applying a corrective reward and a regularization reward.
However, Jang teaches an additional action prediction model that predicts an action difference, and Benbrahim teaches a reinforcement learning technique that uses rewards. In combination, it would have been prima facie obvious that the additional action prediction model, if trained using reinforcement learning, would have used a reward. The Examiner is interpreting a “corrective reward” as a reward based on how close the predicted action difference is to the actual action difference (in other words, a reward based on how correct the correction value is).
Regarding the regularization reward, Husain teaches that regularization rewards are already well-known in the art (see abstract). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Jang in view of Benbrahim to incorporate the teachings of Husain to add a regularization reward in order to prevent over-fitting of the policy.
Allowable Subject Matter
Claims 4, 9-10, and 16 would be allowable if rewritten to overcome the rejections under 35 U.S.C. 101 and in independent form including all of the limitations of the base claim and any intervening claims.
Regarding claim 4, the combination of Jang in view of Benbrahim teaches the method according to claim 1. Both Jang and Benbrahim fail to teach applying the control policy by the real world robot without using the action-related corrective policy. Instead, Jang teaches applying the control policy by the real world robot together with the action-related corrective policy (see Fig. 5).
Claim 16 has allowable subject matter for the same reason as claim 4.
Regarding claim 9, the combination of Jang in view of Benbrahim teaches the method according to claim 1. Both Jang and Benbrahim fail to teach the determining of the control policy comprises fine tuning the initial control policy. Jang does not fine tune the initial control policy (policy π); Jang only uses the additional action prediction model to determine the correction amount and applies that correction amount to the output of the policy.
Regarding claim 10, because Jang in view of Benbrahim fails to teach fine tuning the control policy, it also fails to teach the fine tuning comprises using the action-related corrective policy with frozen parameters.
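For context on claim 10's limitation as the Examiner understands it, fine tuning with the action-related corrective policy with frozen parameters conventionally means the corrective policy's weights are excluded from gradient updates while the control policy continues to train. A PyTorch-style sketch (the Examiner's illustration; assumes both policies are torch.nn.Module instances, and all names are hypothetical):

```python
import torch

# Sketch of fine tuning a control policy while the corrective policy's
# parameters are frozen. Assumes PyTorch modules; names are hypothetical.

def finetune_with_frozen_corrective(control_policy, corrective_policy,
                                    batches, lr=1e-4):
    # Freeze the corrective policy: its parameters receive no gradients.
    for p in corrective_policy.parameters():
        p.requires_grad = False

    # Only the control policy's parameters are optimized.
    optimizer = torch.optim.Adam(control_policy.parameters(), lr=lr)
    for states, target_actions in batches:
        actions = control_policy(states) + corrective_policy(states)
        loss = torch.nn.functional.mse_loss(actions, target_actions)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```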
The Examiner further emphasizes the claims as a whole and hereby asserts that the totality of the evidence fails to set forth, either explicitly or implicitly, an appropriate rationale for further modification of the evidence at hand to arrive at the claimed invention. The combination of features as claimed would not have been obvious to one of ordinary skill in the art, as combining references from the totality of the evidence to reach the claimed combination of features would require a substantial reconstruction of Applicant's claimed invention relying on improper hindsight bias. It is thereby asserted by the Examiner, in light of the above and in further deliberation over all of the evidence at hand, that the claims contain allowable subject matter, as the evidence at hand does not anticipate the claims and does not render any further modification of the references obvious to a person of ordinary skill in the art.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MINATO LEE HORNER whose telephone number is (571) 272-5425. The examiner can normally be reached M-F 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Christian Chace, can be reached at (571) 272-4190. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.L.H./Examiner, Art Unit 3665 /CHRISTIAN CHACE/Supervisory Patent Examiner, Art Unit 3665