DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This action is in reply to the application filed on February 2, 2024, and the response and amendments filed on 10/23/2025.
Claims 1-14 have been amended.
Claims 15-17 have been added.
No claims have been cancelled.
Claims 1-17 are currently pending and have been examined.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement(s) (IDS(s)) submitted on 2/2/2024, 6/21/2024, and 1/27/2026 have been received and considered.
Response to Amendment
Applicant’s amendments to the Abstract, Specification, Drawings, and Claims have overcome each and every objection and the rejection under 35 U.S.C. § 101 previously set forth in the Non-Final Office Action mailed 8/01/2025. However, the amendments have necessitated a new, similar ground of rejection under 35 U.S.C. § 101.
Response to Arguments
Applicant’s arguments, see pages 12 and 13, filed 10/23/2025, with respect to the rejections of Claims 1-14 under 35 U.S.C. § 103 have been fully considered and are persuasive regarding the prior art not teaching “wherein the target person being a person satisfying a predetermined detection condition for detecting a visitor to the factory, and the predetermined detection condition including at least one of a condition that the person wears an article associated with visitors or a condition related to a degree of match between the person and a feature in a face of a visitor registered in advance.” Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Matsumura (JP 2008065381), Nordbruch (US 20170320529), and Okada (US 20220300012), as necessitated by amendment.
Applicant’s arguments, see page 9, filed 10/23/2025, with respect to the objection to Claim 5 as a separate and distinct embodiment have been fully considered but are not persuasive. The objection stems from the presentation of Claim 5 with the structure and preamble of an independent claim together with the additional language of a dependent claim, resulting in a distinct embodiment. Therefore, the objection is maintained.
Claim Objections
Claim 5 recites “A system” and introduces a new embodiment; it is therefore presented as an independent claim. However, language such as “as according to claim 1” is indicative of a dependent claim within the new “system” embodiment. Because claim 5 also explicitly recites “the controller,” the controller is considered a separate and distinct embodiment from the “system.”
Claim 8, as newly amended, recites “the captured image an input image,” which appears to be either redundant or missing a connective word. For examination purposes, the phrase will be interpreted as redundant.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim(s) 1-13 and 15-17 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to non-statutory subject matter. The claims are directed to a computer readable medium. As explained in U.S. Patent & Trademark Office, Subject Matter Eligibility of Computer-Readable Media, 1351 Off. Gaz. Pat. Office 212 (Feb. 23, 2010):
The United States Patent and Trademark Office (USPTO) is obliged to give claims their broadest reasonable interpretation consistent with the specification during proceedings before the USPTO. See In re Zletz, 893 F.2d 319 (Fed. Cir. 1989) (during patent examination the pending claims must be interpreted as broadly as their terms reasonably allow). The broadest reasonable interpretation of a claim drawn to a computer readable medium (also called machine readable medium and other such variations) typically covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable media, particularly when the specification is silent. See MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. § 101 as covering non-statutory subject matter. See In re Nuijten, 500 F.3d 1346, 1356-57 (Fed. Cir. 2007) (transitory embodiments are not directed to statutory subject matter) and Interim Examination Instructions for Evaluating Subject Matter Eligibility Under 35 U.S.C. § 101, Aug. 24, 2009; p. 2.
The USPTO recognizes that applicants may have claims directed to computer readable media that cover signals per se, which the USPTO must reject under 35 U.S.C. § 101 as covering both non-statutory subject matter and statutory subject matter. In an effort to assist the patent community in overcoming a rejection or potential rejection under 35 U.S.C. § 101 in this situation, the USPTO suggests the following approach. A claim drawn to such a computer readable medium that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments to avoid a rejection under 35 U.S.C. § 101 by adding the limitation "non-transitory" to the claim. Cf. Animals - Patentability, 1077 Off. Gaz. Pat. Office 24 (April 21, 1987) (suggesting that applicants add the limitation "non-human" to a claim covering a multicellular organism to avoid a rejection under 35 U.S.C. § 101). Such an amendment would typically not raise the issue of new matter, even when the specification is silent because the broadest reasonable interpretation relies on the ordinary and customary meaning that includes signals per se. The limited situations in which such an amendment could raise issues of new matter occur, for example, when the specification does not support a non-transitory embodiment because a signal per se is the only viable embodiment such that the amended claim is impermissibly broadened beyond the supporting disclosure. See, e.g., Gentry Gallery, Inc. v. Berkline Corp., 134 F.3d 1473 (Fed. Cir. 1998).
Accordingly, claim 1 is directed to “a storage unit storing a program,” which covers both transitory and non-transitory embodiments, and is therefore rejected under 35 U.S.C. § 101 as being directed to non-statutory subject matter. Claims 2-13 and 15-17 are similarly rejected as depending from rejected claim 1.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 1-7, 9-10, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Matsumura (JP 2008065381, hereinafter “Matsumura,” all citations and excerpts taken from the attached machine translation) in view of Nordbruch (US 20170320529, hereinafter “Nordbruch”) and Okada et al. (US 20220300012, hereinafter “Okada”).
Regarding Claim 1, Matsumura teaches:
A controller used in a system that moves a moving object by remote control, (Matsumura Pg 3 ¶ 8 “The system further comprises remote vehicle control means for moving the vehicle to the destination based on the destination instruction received by the destination instruction receiving means. Thereby, the safety of the user who is operating the vehicle remotely can be ensured;”)
the controller comprising: a storage unit storing a program; and one or more processors, wherein the one or more processors are configured to execute the program stored in the storage unit to: (Matsumura Pg 4 ¶ 10-Pg 5 ¶ 1 “FIG. 2 is an enlarged functional block diagram of the destination instruction transmission means 80. The destination instruction transmission means 80 includes a communication unit 81, an input processing unit 83, an information output unit 84, and an input unit 85 as main components. An object position detector 86 may be further provided as necessary. The communication unit 81, the input processing unit 83, and the information output unit 84 may be realized by a microcomputer;”)
detect a target person in a factory using a first sensor provided in the system, […] (Matsumura Pg 2 ¶ 6 “Object position detecting means for detecting positions of objects around the user based on the position of the user detected by the user position detecting means;” object position detecting being analogous to determination of whereabouts, and the user being the target person whose position is detected)
[…] as a result of detecting the target person, determine whereabouts of the target person on the basis of a detection result of the target person; (Matsumura Pg 6 ¶ 7 lines 4-6 “Further, if a user is imaged by a camera and pattern matching or the like is performed, the presence of the user can be recognized, and the position can be calculated in consideration of the camera angle and the like,”)
identify the moving object as being controlled by the remote control (Matsumura Pg 2 ¶ 6-7 “Object position detecting means for detecting positions of objects around the user based on the position of the user detected by the user position detecting means. Based on the relative positions and relative speeds calculated from the positions of the user and the objects around the user detected by the user position detection means and the object position detection means, there is a danger that the user and the surrounding objects may collide. A dangerous state determination means for determining whether or not a state is present;” and Pg 3 ¶ 10 lines 1-3 “The danger notification means changes the contents of the remote vehicle control when the dangerous state determination means determines that the user is in a dangerous state when the remote vehicle control means is performing the remote vehicle control,”)
[and likely to approach] the target person on the basis of the determined whereabouts; (Matsumura Pg 7 ¶ 3 lines 1-4 “The dangerous state determination means 30 calculates a relative position and a relative speed based on both the user position detected by the user position detection means 10 and the positions detected by the object position detection means 20, and based on this, the user It is determined whether or not the surrounding object is in a dangerous state where it can collide,” teaching the likelihood of a detected object (but not specifically the controlled vehicle) colliding with the user)
and transmit a control signal (Matsumura Pg 4 ¶ 10 lines 1-3 “The destination instruction transmission means 80 includes a communication unit 81, […] as main components,”)
to the moving object, (Matsumura Pg 9 ¶ 2 lines 6-10 “Conversely, the vehicle may change the course to the right so as to indicate a direction in which danger is avoided, and take a course such as D. In the constituent elements of FIG. 1, such a danger notice controls the remote vehicle control means 70 based on the danger notice signal from the danger notice means 40, and the remote vehicle control means 70 controls the steering ECU 72 to control the steering device 75. This can be realized by steering and changing the course,” teaching a control signal for one means of alert transmitted to the vehicle from the remote control means.)
the control signal being a signal for changing a driving mode of the moving object to an alert mode for giving an alert to the target person. (Matsumura Pg 8 ¶ 4 lines 1-3 “The danger notification unit 40 notifies the user of the danger when the danger state determination unit 30 determines that the user is in a dangerous state. As a notification method, sound output by sound output means 41 such as a horn and a speaker including a buzzer, light of a headlight 42,”)
Matsumura does not teach:
[…] wherein the target person being a person satisfying a predetermined detection condition for detecting a visitor to the factory, and the predetermined detection condition including at least one of a condition that the person wears an article associated with visitors or a condition related to a degree of match between the person and a feature in a face of a visitor registered in advance; […]
[…the moving object being controlled by the remote control] and likely to approach […]
Within the same field of endeavor as Matsumura, Nordbruch teaches:
[…the moving object being controlled by the remote control] and likely to approach […] (Nordbruch ¶ 0040 “The data which are relevant for the autonomous driving operation are, for example, the following data, individually or in combination: map data from a digital map, position data about one or multiple stationary object(s) which is/are located within the manufacturing system, position data about one or multiple mobile object (s) which is/are located within the manufacturing system, prediction data with regard to a future movement of one or multiple of the mobile object(s) which is/are located within the manufacturing system, target position data for one or multiple target(s) which the vehicle is supposed to drive to autonomously. A target of this type is, for example, a position or a location of an assembly station, a test facility, or an end of the assembly line or a parking facility or a parking position in a parking facility,” emphasis added, teaching prediction of the route of the remote-controlled vehicle in addition to the sensed objects)
Matsumura and Nordbruch are considered analogous because they both relate to remote vehicle control. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the collision prediction between a user and another tracked object of Matsumura by the simple substitution of Nordbruch’s remotely controlled vehicle, with its target position data and prediction data of future movement, for the other tracked object, arriving at the combination of Matsumura’s collision prediction for the user with Nordbruch’s prediction of the future state of the remotely controlled vehicle and thereby predicting a collision between the user and the remote vehicle. This modification would be made with a reasonable expectation of success as motivated by protecting the user from the remote vehicle in case of distraction or error.
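For illustration of the rationale only, the following sketch shows one way a relative-position and relative-speed computation of the kind described by Matsumura could be combined with a prediction of the remotely controlled vehicle's movement as taught by Nordbruch. The function names, the two-dimensional representation, and the numeric thresholds are the editor's assumptions and are not disclosed by either reference.

```python
import math

def time_to_closest_approach(p_person, p_vehicle, v_person, v_vehicle):
    # Relative position and velocity of the vehicle with respect to the person.
    rx, ry = p_vehicle[0] - p_person[0], p_vehicle[1] - p_person[1]
    vx, vy = v_vehicle[0] - v_person[0], v_vehicle[1] - v_person[1]
    speed_sq = vx * vx + vy * vy
    if speed_sq == 0.0:
        return 0.0  # No relative motion; the current separation is the closest approach.
    return max(0.0, -(rx * vx + ry * vy) / speed_sq)

def likely_to_approach(p_person, p_vehicle, v_person, v_vehicle,
                       distance_threshold=3.0, time_horizon=5.0):
    """Flag the remotely controlled vehicle as likely to approach the target person
    when the predicted separation falls below a threshold within a time horizon."""
    t = min(time_to_closest_approach(p_person, p_vehicle, v_person, v_vehicle),
            time_horizon)
    person_x = p_person[0] + v_person[0] * t
    person_y = p_person[1] + v_person[1] * t
    vehicle_x = p_vehicle[0] + v_vehicle[0] * t
    vehicle_y = p_vehicle[1] + v_vehicle[1] * t
    return math.hypot(vehicle_x - person_x, vehicle_y - person_y) <= distance_threshold
```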
The combination of Matsumura and Nordbruch does not teach:
[…] wherein the target person being a person satisfying a predetermined detection condition for detecting a visitor to the factory, and the predetermined detection condition including at least one of a condition that the person wears an article associated with visitors or a condition related to a degree of match between the person and a feature in a face of a visitor registered in advance; […]
Within the same field of endeavor as Matsumura and Nordbruch, Okada teaches:
[…] wherein the target person being a person satisfying a predetermined detection condition for detecting a visitor to the factory, and the predetermined detection condition including at least one of a condition that the person wears an article associated with visitors or a condition related to a degree of match between the person and a feature in a face (Okada ¶ 0056 lines 1-10 “The visitor identifying unit 454 identifies a visitor who has visited the showroom 1 from the in-showroom image that has been acquired by the in-facility image acquisition unit 451. For example, the visitor identifying unit 454 extracts a person from the in-showroom image, and further extracts (recognizes) a face image from the person that has been extracted. Then, the visitor identifying unit 454 searches the visitor data stored in the visitor database 443 for visitor data having a face image that matches the face image that has been extracted, and identifies the visitor,” teaching identification of visitors in an indoor space from images based on facial recognition (a predetermined condition related to a degree of match between the person and a feature in a face))
of a visitor registered in advance; […] (Okada ¶ 0048 lines 1-3 “The visitor DB 443 stores visitor information about the visitor who visits the showroom 1. The visitor information includes a face image of the visitor,” and ¶ 0049 lines 1-2 “In addition, the visitor DB 443 may store a visiting flag indicating a visitor who has visited the showroom 1” teaching a database of visitor images, analogous to registering visitor information in advance in that the database includes prior images of known visitors)
Matsumura, Nordbruch, and Okada are all considered analogous because they all relate to remote vehicle control, Okada specifically relating to remote robot control in proximity to people. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the collision prediction between a user and another tracked object of Matsumura by the simple substitution of Okada’s visitor, identified from camera images by facial recognition against a visitor information database including facial images, for the user. This modification would be made with a reasonable expectation of success as motivated by more appropriately serving visitors (Okada ¶ 0004), which, in the context of the safety system of Matsumura, one of ordinary skill in the art would understand to include providing safety warnings to identified visitors who may not be familiar with the space.
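As an illustration of the facial-matching condition taught by Okada, the following minimal sketch compares a detected face against visitor data registered in advance. The embedding-based comparison, the cosine-similarity measure, and the threshold are assumptions made for illustration; Okada discloses matching a face image against a visitor database without specifying this particular algorithm.

```python
import numpy as np

def match_registered_visitor(face_embedding, visitor_db, match_threshold=0.6):
    """Return the identifier of a visitor registered in advance whose stored face
    feature matches the detected face, or None when no registered visitor matches.
    visitor_db maps visitor_id -> stored face embedding (unit-length vectors assumed)."""
    for visitor_id, stored_embedding in visitor_db.items():
        similarity = float(np.dot(face_embedding, stored_embedding))  # cosine similarity
        if similarity >= match_threshold:
            return visitor_id
    return None
```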
Regarding Claim 2, the combination of Matsumura, Nordbruch, and Okada teaches the elements of claim 1 as described above. Matsumura further teaches:
wherein to identify the moving object as being controlled by the remote control and likely to approach the target person comprises at least one of identifying the moving object moving in a building where the target person is present, identifying that the moving object is approaching the target person, (Matsumura Pg 7 ¶ 3 lines 1-4 “The dangerous state determination means 30 calculates a relative position and a relative speed based on both the user position detected by the user position detection means 10 and the positions detected by the object position detection means 20, and based on this, the user It is determined whether or not the surrounding object is in a dangerous state where it can collide,” teaching the likelihood of a detected object colliding with the user, as combined previously with the vehicle of Nordbruch)
or identifying that the moving object is at a distance equal to or less than a predetermined distance from the target person. (Matsumura Pg 7 ¶ 4 lines 1-3 “To explain individually, when an object around the user is moving, by calculating its moving speed and direction, it is calculated whether or not it will reach the user within a certain distance in a certain time. You may make it determine with it being in a dangerous state,”)
Regarding Claim 3, the combination of Matsumura, Nordbruch, and Okada teaches the elements of claim 1 as described above. Matsumura further teaches:
wherein the control signal comprises instructions for reducing a moving speed of the moving object. (Matsumura Pg 3 ¶ 12 line 1 “The change of the content of the remote vehicle control includes moving or stopping the vehicle,” and Pg 9 ¶ 4 “For example, when the degree of danger is small, the moving speed of the vehicle may be slowed down to give the user time margin to assist in avoiding the danger without stopping. In this case, the remote vehicle control content may be changed by controlling the engine ECU 71 and the brake ECU 72,” teaching control response of speed reduction)
Regarding Claim 4, the combination of Matsumura, Nordbruch, and Okada teaches the elements of claim 1 as described above. Matsumura further teaches:
wherein the control signal comprises instructions for increasing a quantity of light emitted from the moving object or for increasing a volume of sound emitted from the moving object. (Matsumura Pg 8 ¶ 5 lines 3-4 “For example, if the degree of danger is high, the degree of danger is expressed by increasing the volume accordingly, increasing the luminous intensity,”)
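For illustration of the control-signal contents mapped for claims 3 and 4 above, the following sketch shows one possible alert-mode payload that scales a speed reduction and an increase in light and sound output with the assessed degree of danger, in the spirit of Matsumura's graduated volume and luminous-intensity response. The field names and scale factors are hypothetical and are not drawn from the reference.

```python
from dataclasses import dataclass

@dataclass
class AlertModeSignal:
    # Hypothetical control-signal payload for the alert mode.
    target_speed_scale: float     # < 1.0 reduces the moving speed (claim 3)
    light_intensity_scale: float  # > 1.0 increases emitted light (claim 4)
    sound_volume_scale: float     # > 1.0 increases emitted sound (claim 4)

def build_alert_signal(danger_level: float) -> AlertModeSignal:
    """Scale the alert response with the assessed degree of danger (0.0 to 1.0)."""
    level = max(0.0, min(danger_level, 1.0))
    return AlertModeSignal(
        target_speed_scale=1.0 - 0.5 * level,
        light_intensity_scale=1.0 + level,
        sound_volume_scale=1.0 + level,
    )
```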
Regarding Claim 5, the combination of Matsumura, Nordbruch, and Okada teaches the elements of claim 1 as described above. Matsumura further teaches:
the moving object, (Matsumura Pg 4 ¶ 7 lines 1-2 “FIG. 1 is a functional block diagram of a vehicle with a user protection function outside a vehicle according to the present embodiment,”)
wherein the one or more processors of the controller are configured to transmit instructions that move the moving object for implementing remote control. (Matsumura Pg 4 ¶ 8 “In addition, when it is configured so that a user outside the vehicle can remotely control the vehicle, the vehicle is provided with a destination instruction receiving unit 60 and a remote vehicle control unit 70, and the user remotely controls the vehicle with the user protection function outside the vehicle. In order to operate, a destination instruction transmission means 80 possessed by the user outside the vehicle is provided,”)
Regarding Claim 6, the combination of Matsumura, Nordbruch, and Okada teaches the elements of claim 5 as described above. Matsumura further teaches:
the first sensor, (Matsumura Pg 6 ¶ 7 “The user position detection means 10 is for detecting the position of the user outside the vehicle. In order to detect the position of the user, for example, the radar can be detected using millimeter wave radar, ultrasonic clearance sonar, camera image recognition, or the like. If a radar or clearance sonar is used, the distance between the user and the vehicle can be detected, so that the position can be detected. Further, if a user is imaged by a camera and pattern matching or the like is performed, the presence of the user can be recognized, and the position can be calculated in consideration of the camera angle and the like. In FIG. 1, the camera 11 is illustrated as an example, but this can be replaced with an objective position detection sensor such as a radar or a clearance sonar, and a plurality of these may be used in combination. For example, if the presence of the user is clearly recognized by image recognition of the camera and the accurate distance to the user is detected by the radar, the position of the user can be detected more accurately,” teaching the use of navigation sensors including cameras and “clearance sonar” for detecting the target person)
wherein the one or more processors of the controller are configured to determine a location of the moving object using the first sensor, (Matsumura Pg 7 ¶ 2 lines 1-4 “The object position detection means 20 may use radar, clearance sonar, image recognition by the camera 11 or the like, as with the user position detection means 10. Note that these sensors used as the object position detecting means 20 may be provided separately from the user position detecting means 10 or may be used together when they can be used together,”)
wherein the one or more processors of the controller are configured to detect the target person using the first sensor. (Matsumura Pg 6 ¶ 9 lines 1-2 “The user position detection means 10 may be any means other than the direct detection means such as the above-mentioned camera,”)
Regarding Claim 7, the combination of Matsumura, Nordbruch, and Okada teaches the elements of claim 6 as described above. Matsumura further teaches:
wherein the first sensor comprises a camera that captures an image used for determining the location of the moving object. (Matsumura Pg 6 ¶ 7 lines 1-7 “The user position detection means 10 is for detecting the position of the user outside the vehicle. In order to detect the position of the user, for example […] camera image recognition, or the like […] Further, if a user is imaged by a camera and pattern matching or the like is performed, the presence of the user can be recognized, and the position can be calculated in consideration of the camera angle and the like. In FIG. 1, the camera 11 is illustrated as an example,”)
Regarding Claim 9, the combination of Matsumura, Nordbruch, and Okada teaches the elements of claim 6 as described above. Matsumura further teaches:
The system according to claim 6, wherein the one or more processors of the controller are configured to determine the whereabouts on the basis of the detection result of the target person and location information about the first sensor. (Matsumura Pg 7 ¶ 3 lines 1-4 “The dangerous state determination means 30 calculates a relative position and a relative speed based on both the user position detected by the user position detection means 10 and the positions detected by the object position detection means 20, and based on this, the user It is determined whether or not the surrounding object is in a dangerous state where it can collide,” relative location indicating that the sensing position is taken into consideration)
Regarding Claim 10, the combination of Matsumura, Nordbruch, and Okada teaches the elements of claim 5 as described above. Matsumura further teaches:
wherein the moving object is a vehicle, […] (Matsumura Pg 6 ¶ 11 line 4 – Pg 7 ¶ 1 line 2 “as the objects around the user, both a stationary object such as the wall 300 and the parked vehicle 301 and a moving object such as a bicycle 302 and a motorcycle moving around the user can be considered,”)
Matsumura does not explicitly teach:
[…] the one or more processors of the controller are configured to cause the moving object to run between a first place and a second place in a factory for manufacture of the moving object
by implementing the remote control,
and a first step relating to manufacture of the moving object is performed at the first place and a second step as a step subsequent to the first step is performed at the second place.
Within the same field of endeavor as Matsumura, Nordbruch teaches:
[…] the one or more processors of the controller are configured to cause the moving object to run between a first place and a second place in a factory for manufacture of the moving object (Nordbruch ¶ 0024 lines 1-5 “According to another specific embodiment, it is provided that the manufacturing system includes an assembly line for vehicle manufacturing, the vehicle driving from one assembly station to another assembly station of the assembly line,”)
by implementing the remote control, (Nordbruch ¶ 0036 lines 9-12 “According to one specific embodiment, the manufacturing system is thus configured to remotely control the vehicle. This is accomplished via control instructions.,”)
and a first step relating to manufacture of the moving object is performed at the first place and a second step as a step subsequent to the first step is performed at the second place. (Nordbruch ¶ 0024 lines 3-5 “the vehicle driving from one assembly station to another assembly station of the assembly line,”)
Matsumura and Nordbruch are considered analogous because they both relate to remote vehicle control. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the object position detection of a vehicle of Matsumura by applying it in the vehicle manufacturing environment, between a first and a second assembly station, of Nordbruch. This modification would be made with a reasonable expectation of success as motivated by reducing costs for moving vehicles by removing the necessity for conveyor belts (Nordbruch ¶ 0024).
Regarding Claim 14, Matsumura teaches:
A method of controlling a moving object implemented in a system that moves the moving object by remote control, comprising: a step of detecting a target person using a sensor […] (Matsumura Pg 6 ¶ 7 lines 4-6 “Further, if a user is imaged by a camera and pattern matching or the like is performed, the presence of the user can be recognized, and the position can be calculated in consideration of the camera angle and the like,”)
[…] as a result of detecting the target person, a step of determining whereabouts of the target person on the basis of a detection result of the target person; (Matsumura Pg 2 ¶ 6 “Object position detecting means for detecting positions of objects around the user based on the position of the user detected by the user position detecting means;” object position detecting being analogous to determination of whereabouts, and the user being the target person whose position is detected)
a step of identifying the moving object as being controlled by the remote control (Matsumura Pg 2 ¶ 6-7 “Object position detecting means for detecting positions of objects around the user based on the position of the user detected by the user position detecting means. Based on the relative positions and relative speeds calculated from the positions of the user and the objects around the user detected by the user position detection means and the object position detection means, there is a danger that the user and the surrounding objects may collide. A dangerous state determination means for determining whether or not a state is present;” and Pg 3 ¶ 10 lines 1-3 “The danger notification means changes the contents of the remote vehicle control when the dangerous state determination means determines that the user is in a dangerous state when the remote vehicle control means is performing the remote vehicle control,”)
[and likely to approach] the target person on the basis of the determined whereabouts; (Matsumura Pg 7 ¶ 3 lines 1-4 “The dangerous state determination means 30 calculates a relative position and a relative speed based on both the user position detected by the user position detection means 10 and the positions detected by the object position detection means 20, and based on this, the user It is determined whether or not the surrounding object is in a dangerous state where it can collide,” teaching the likelihood of a detected object (but not specifically the controlled vehicle) colliding with the user)
and a step of changing a driving mode of the moving object to an alert mode for giving an alert to the target person. (Matsumura Pg 8 ¶ 4 lines 1-3 “The danger notification unit 40 notifies the user of the danger when the danger state determination unit 30 determines that the user is in a dangerous state. As a notification method, sound output by sound output means 41 such as a horn and a speaker including a buzzer, light of a headlight 42,”)
Matsumura does not explicitly teach:
[…] provided at the system, wherein the target person being a person satisfying a predetermined detection condition for detecting a visitor to a factory, and the predetermined detection condition including at least one of a condition that the person wears an article associated with visitors or a condition related to a degree of match between the person and a feature in a face of a visitor registered in advance; […]
[…the moving object being controlled by the remote control] and likely to approach […]
Within the same field of endeavor as Matsumura, Nordbruch teaches:
[…] provided at the system; […] (Nordbruch ¶ 0027 lines 1-5 “According to another specific embodiment, it is provided that the driving operation of the vehicle is monitored and/or documented at least partially, in particular completely, with the aid of a vehicle-external monitoring system.,” analogous to the first detection unit,)
[…the moving object being controlled by the remote control] and likely to approach […] (Nordbruch ¶ 0040 “The data which are relevant for the autonomous driving operation are, for example, the following data, individually or in combination: map data from a digital map, position data about one or multiple stationary object(s) which is/are located within the manufacturing system, position data about one or multiple mobile object (s) which is/are located within the manufacturing system, prediction data with regard to a future movement of one or multiple of the mobile object(s) which is/are located within the manufacturing system, target position data for one or multiple target(s) which the vehicle is supposed to drive to autonomously. A target of this type is, for example, a position or a location of an assembly station, a test facility, or an end of the assembly line or a parking facility or a parking position in a parking facility,” emphasis added, teaching prediction of the route of the remote-controlled vehicle in addition to the sensed objects)
Matsumura and Nordbruch are considered analogous because they both relate to remote vehicle control. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the object position detection unit of Matsumura by adding the exterior vehicle monitoring system of Nordbruch. This modification would be made with a reasonable expectation of success as motivated by adding the ability to retrace and error-check the operation (Nordbruch ¶ 0027) as well as eliminating the need for the vehicle to have a digital map of the manufacturing system (Nordbruch ¶ 0042). It would also have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the collision prediction between a user and another tracked object of Matsumura by the simple substitution of Nordbruch’s remotely controlled vehicle, with its target position data and prediction data of future movement, for the other tracked object, arriving at the combination of Matsumura’s collision prediction for the user with Nordbruch’s prediction of the future state of the remotely controlled vehicle and thereby predicting a collision between the user and the remote vehicle. This modification would be made with a reasonable expectation of success as motivated by protecting the user from the remote vehicle in case of distraction or error.
The combination of Matsumura and Nordbruch does not teach:
[…] wherein the target person being a person satisfying a predetermined detection condition for detecting a visitor to a factory, and the predetermined detection condition including at least one of a condition that the person wears an article associated with visitors or a condition related to a degree of match between the person and a feature in a face of a visitor registered in advance; […]
Within the same field of endeavor as Matsumura and Nordbruch, Okada teaches:
[…] wherein the target person being a person satisfying a predetermined detection condition for detecting a visitor to the factory, and the predetermined detection condition including at least one of a condition that the person wears an article associated with visitors or a condition related to a degree of match between the person and a feature in a face (Okada ¶ 0056 lines 1-10 “The visitor identifying unit 454 identifies a visitor who has visited the showroom 1 from the in-showroom image that has been acquired by the in-facility image acquisition unit 451. For example, the visitor identifying unit 454 extracts a person from the in-showroom image, and further extracts (recognizes) a face image from the person that has been extracted. Then, the visitor identifying unit 454 searches the visitor data stored in the visitor database 443 for visitor data having a face image that matches the face image that has been extracted, and identifies the visitor,” teaching identification of visitors in an indoor space from images based on facial recognition (a predetermined condition related to a degree of match between the person and a feature in a face))
of a visitor registered in advance; […] (Okada ¶ 0048 lines 1-3 “The visitor DB 443 stores visitor information about the visitor who visits the showroom 1. The visitor information includes a face image of the visitor,” and ¶ 0049 lines 1-2 “In addition, the visitor DB 443 may store a visiting flag indicating a visitor who has visited the showroom 1” teaching a database of visitor images, analogous to registering visitor information in advance in that the database includes prior images of known visitors)
Matsumura, Nordbruch, and Okada are all considered analogous because they all relate to remote vehicle control, Okada specifically relating to remote robot control in proximity to people. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the collision prediction between a user and another tracked object of Matsumura by the simple substitution of Okada’s visitor, identified from camera images by facial recognition against a visitor information database including facial images, for the user. This modification would be made with a reasonable expectation of success as motivated by more appropriately serving visitors (Okada ¶ 0004), which, in the context of the safety system of Matsumura, one of ordinary skill in the art would understand to include providing safety warnings to identified visitors who may not be familiar with the space.
Regarding Claim 15, the combination of Matsumura, Nordbruch, and Okada teaches the elements of claim 5 as described above. Matsumura further teaches:
wherein the […] sensor comprises a […] radar, an ultrasonic wave sensor […] (Matsumura Pg 5 ¶ 7 lines 1-2 “The object position detection unit 86 is provided as necessary, but can also serve as the object position detection unit 20 described later using a […] radar, clearance sonar, or the like,” teaching the use of object-detection sensors including radar and “clearance sonar” (an ultrasonic sensor) for detecting the target person)
Matsumura does not teach:
[…] second [sensor comprises] a light detection and ranging (LiDAR), a millimeter wave [radar, an ultrasonic wave sensor,] or an infrared sensor. […]
Within the same field of endeavor as Matsumura, Nordbruch teaches:
[…] wherein the second sensor comprises a light detection and ranging (LiDAR), a millimeter wave radar, an ultrasonic wave sensor, […] (Nordbruch ¶ 0035 “According to one specific embodiment, the vehicle includes surroundings sensor system for detecting the surroundings of a vehicle. The surroundings sensor system preferably includes one or multiple of the following surroundings sensor(s): radar sensor, LIDAR sensor, ultrasonic sensor,”)
Matsumura and Nordbruch are considered analogous because they both relate to remote vehicle control. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the object position detection unit of Matsumura including radar or clearance sonar by adding the exterior vehicle monitoring system of Nordbruch including radar, LIDAR, or ultrasonic sensors. This modification would be made with a reasonable expectation of success as motivated by adding the ability to retrace and error-check the operation (Nordbruch ¶ 0027) as well as eliminating the need for the vehicle to have a digital map of the manufacturing system (Nordbruch ¶ 0042), in the manner of combining prior art elements (Matsumura’s object detection and Nordbruch’s exterior vehicle monitoring system) according to known methods (integrating the sensors of both systems into pedestrian warning systems) to yield predictable results (pedestrian detection by the respective sensors) [MPEP 2143(I)(A)].
Regarding Claim 17, the combination of Matsumura, Nordbruch, and Okada teaches the elements of claim 1 as described above. Matsumura further teaches:
wherein to identify the moving object as being controlled by the remote control and likely to approach the target person comprises […] identifying that the moving object is approaching the target person, (Matsumura Pg 7 ¶ 3 lines 1-4 “The dangerous state determination means 30 calculates a relative position and a relative speed based on both the user position detected by the user position detection means 10 and the positions detected by the object position detection means 20, and based on this, the user It is determined whether or not the surrounding object is in a dangerous state where it can collide,” teaching the likelihood of a detected object colliding with the user, as combined previously with the vehicle of Nordbruch)
and identifying that the moving object is at a distance equal to or less than a predetermined distance from the target person. (Matsumura Pg 7 ¶ 4 lines 1-3 “To explain individually, when an object around the user is moving, by calculating its moving speed and direction, it is calculated whether or not it will reach the user within a certain distance in a certain time. You may make it determine with it being in a dangerous state,”)
Matsumura does not teach:
[…] identifying the moving object moving in a building where the target person is present, […]
Within the same field of endeavor as Matsumura, Okada teaches:
[…] identifying the moving object moving in a building where the target person is present, […] (Okada ¶ 0021 lines 4-12 “As illustrated in FIG. 1, in the robot management system 100 using a server device (control server) 4 according to the present embodiment, a guide robot 3 capable of self-propelling is arranged in a showroom 1 of an automobile dealer where an exhibition vehicle 2 is exhibited. Then, a visitor who visits the showroom 1 is identified from in-showroom images captured by each of a plurality of imaging apparatuses 11 installed on a ceiling 10 of the showroom 1,” teaching identification of both the robot and visitors in an indoor space (showroom) in order to determine proper robot control in relation to the visitor)
Matsumura, Nordbruch, and Okada are all considered analogous because they all relate to remote vehicle control, Okada specifically relating to remote robot control in proximity to people. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the collision prediction between a user and another tracked object of Matsumura, based on whether there is a danger of collision and on a collision distance, by the simple addition of Okada’s identification of a robot and a visitor within a showroom to determine proper interaction between the robot and the visitor. This modification would be made with a reasonable expectation of success as motivated by more appropriately serving visitors (Okada ¶ 0004), which, in the context of the safety system of Matsumura, one of ordinary skill in the art would understand to include providing safety warnings to identified visitors who may not be familiar with the space.
Claim(s) 8 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Matsumura in view of Nordbruch and Okada and further in view of Datta (US 20210398409, hereinafter “Datta”).
Regarding Claim 8 the combination of Matsumura, Nordbruch, and Okada teaches the elements of claim 7 as described above. Matsumura further teaches:
wherein the one or more processors of the controller are configured to detect the target person on the basis of the captured image acquired by the camera, […] (Matsumura Pg 6 ¶ 7 lines 1-7 “The user position detection means 10 is for detecting the position of the user outside the vehicle. In order to detect the position of the user, for example […] camera image recognition, or the like […] Further, if a user is imaged by a camera and pattern matching or the like is performed, the presence of the user can be recognized, and the position can be calculated in consideration of the camera angle and the like. In FIG. 1, the camera 11 is illustrated as an example,”)
[…] the person satisfying the predetermined detection condition receives the captured image an input image. (Matsumura Pg 6 ¶ 7 lines 1-7 “when using image recognition, a collision is possible in consideration of the user's height and width as well as the user's moving direction, speed, and acceleration. It is preferable to determine the sex,” teaching multiple predetermined conditions used in image recognition including height, width, sex, moving direction, speed, and acceleration)
Matsumura does not teach:
[…] and a learning model having learned about whether […]
Within the same field of endeavor as Matsumura, Datta teaches:
[…] and a learning model having learned about whether […] (Datta ¶ 0037 “As illustrated in FIG. 3, the vehicle 220 is shown traveling in a hazardous direction towards the pedestrian 205. The device 215 detects the hazardous travel path of the vehicle 220 using the object classifier, vehicle classifier, and artificial intelligence and/or machine learning techniques to alert the pedestrian 205 via the device. The alert signal may be transmitted once the vehicle travels outside the designated lane 230, outside the lane marker 235, nears a sidewalk barrier 240, or another threshold which may be indicative of a potential hazard,” emphasis added, teaching machine learning use in vehicle-pedestrian hazard identification)
Matsumura and Datta are considered analogous because they both relate to vehicle hazard detection. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the identification of user conditions such as height, width, speed, direction, acceleration, and sex of Matsumura by adding machine-learning functionality to identify potential hazards. This modification would be made with a reasonable expectation of success as motivated by increasing the usefulness, applicability, and safety of the system by learning new potential hazards.
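The following sketch illustrates, under stated assumptions, how a learned model could receive the captured image as an input image and decide whether a person satisfying the predetermined detection condition is present, as recited in claim 8. The callable model interface and the score threshold are hypothetical and are not drawn from Matsumura or Datta.

```python
def detect_target_person(captured_image, learned_model, score_threshold=0.8):
    """Apply a learned model to the captured image (the input image) and decide whether
    a person satisfying the predetermined detection condition is present.
    learned_model is any callable returning a confidence score in [0.0, 1.0]; its
    training data and architecture are outside the scope of this sketch."""
    score = learned_model(captured_image)
    return score >= score_threshold
```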
Regarding Claim 11, the combination of Matsumura, Nordbruch, and Okada teaches the elements of claim 5 as described above. Matsumura does not teach:
wherein the moving object includes a second sensor that detects a situation around the moving object, and the control signal comprises instructions for increasing sensitivity of detection by the second sensor.
Within the same field of endeavor as Matsumura, Nordbruch teaches:
wherein the moving object includes a second sensor that detects a situation around the moving object, […] (Nordbruch ¶ 0027 lines 1-5 “According to another specific embodiment, it is provided that the driving operation of the vehicle is monitored and/or documented at least partially, in particular completely, with the aid of a vehicle-external monitoring system.,” analogous to the first detection unit, and ¶ 0035 “According to one specific embodiment, the vehicle includes surroundings sensor system for detecting the surroundings of a vehicle. The surroundings sensor system preferably includes one or multiple of the following surroundings sensor(s): radar sensor, LIDAR sensor, ultrasonic sensor, video sensor, and laser sensor,” analogous to the second detection unit)
Matsumura and Nordbruch are considered analogous because they both relate to remote vehicle control. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the object position detection unit of Matsumura by adding the exterior vehicle monitoring system of Nordbruch. This modification would be made with a reasonable expectation of success as motivated by adding the ability to retrace and error-check the operation (Nordbruch ¶ 0027) as well as eliminating the need for the vehicle to have a digital map of the manufacturing system (Nordbruch ¶ 0042).
The combination of Matsumura and Nordbruch does not teach:
[…] and the control signal comprises instructions for increasing sensitivity of detection by the second sensor.
Within the same field of endeavor as Matsumura and Nordbruch, Datta teaches:
[…] and the control signal comprises instructions for increasing sensitivity of detection by the second sensor. (Datta ¶ 0054 lines 2-9 “The server engine 900 may transmit information to the machine learning engine 702 for analysis and processing to output various alerts depending on the vehicle classification. A sensitivity module 910 may change the sensor sensitivity based on […] input from the machine learning engine 702. For example, the sensor sensitivity may be changed depending on the roadway the pedestrian is traveling along,” teaching a signal to adjust sensor sensitivity based on hazard identification)
Matsumura, Nordbruch, and Datta are all considered analogous because they all relate to vehicle hazard detection. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the object position detection unit of Matsumura and exterior vehicle monitoring system of Nordbruch by adding the signal to increase sensor sensitivity of Datta. This modification would be made with a reasonable expectation of success as motivated by increasing the pedestrian safety when contextually appropriate.
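As an illustration of the claimed instruction for increasing detection sensitivity, the following sketch adjusts a second sensor's gain when a target person has been identified near the moving object. The sensor interface (set_gain) and the gain values are assumptions for illustration and are not an API of any cited reference.

```python
def apply_sensitivity_instruction(sensor, target_person_detected,
                                  normal_gain=1.0, heightened_gain=2.0):
    """Raise the second sensor's detection sensitivity when a target person has been
    identified near the moving object, and restore it otherwise.
    sensor.set_gain is a hypothetical interface, not an API from any cited reference."""
    sensor.set_gain(heightened_gain if target_person_detected else normal_gain)
```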
Claim(s) 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Matsumura in view of Nordbruch, Okada, and Datta and further in view of Ng (US 11858128, hereinafter “Ng”).
Regarding Claim 12, the combination of Matsumura, Nordbruch, Okada, and Datta teaches the elements of claim 11 as described above. Matsumura further teaches:
wherein the moving object further comprises: a moving object storage unit storing a program; and one or more moving object processors, the one or more moving object processors are configured to execute the program stored in the moving object storage unit to perform a changing process (Matsumura Pg 12 ¶ 4 “Moreover, the emergency vehicle control means 90 may be provided independently, may be provided in ECU,” as applied to the claim 1 combination of Matsumura’s collision prediction of the user with Nordbruch’s prediction of future remote controlled vehicle state to predict a collision between the user and the remote vehicle, teaching a means within the vehicle ECU of controlling the vehicle in an emergency.)
of changing a driving state of the moving object when the second sensor detects a target person, […] (Matsumura Pg 12 ¶ 2 lines 3-6 “It is also possible to perform control so as to avoid direct contact with the vehicle that the user rushes into. The vehicle traveling control at this time may be performed in cooperation with the engine ECU 71, the steering ECU 72, the brake ECU 73, and the like,” teaching avoidance driving control by the emergency vehicle control means, analogous to moving speed reduction)
Matsumura does not teach:
[…] and the changing process includes at least any of a process of reducing a moving speed of the moving object,
a process of increasing a quantity of light emitted from the moving object,
or a process of increasing a volume of sound emitted from the moving object.
Within the same field of endeavor as Matsumura, Ng teaches:
[…] and the changing process includes at least any of a process of reducing a moving speed of the moving object, […] (Ng Col 6 lines 31-39 “The stop condition may result from the autonomous navigation module 130 determining an expected collision of the robot 104 with an object, […] In one implementation, if any of the status signals 134 are indicative of a stop condition, the rapid braking system operates,” teaching a robot (analogous to an autonomous vehicle) reducing speed by braking in response to an expected collision)
Matsumura and Ng are considered analogous because they both relate to vehicle hazard mitigation. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the emergency vehicle control means of Matsumura by adding the stop condition and rapid braking system operation in response to an expected collision of Ng. This modification would be made with a reasonable expectation of success as motivated by increasing the pedestrian safety.
Regarding Claim 13, the combination of Matsumura, Nordbruch, Okada, Datta, and Ng teaches the elements of claim 12 as described above. Matsumura does not teach:
wherein when a target person becomes undetected by the second sensor after implementation of the changing process, the one or more moving object processors are configured to perform a process of releasing a state where the driving state is changed by the changing process.
Within the same field of endeavor as Matsumura, Ng teaches:
wherein when a target person becomes undetected by the second sensor after implementation of the changing process, the one or more moving object processors are configured to perform a process of releasing a state where the driving state is changed by the changing process. (Ng Col 3 line 63 – Col 4 line 3 “When the stop condition is removed and a start or normal condition is obtained, the inputs to the multiple-input AND gate are again all “high”. As a result, the relay in the stop circuit is again energized, disconnecting the short between the motor's terminals. The braking circuit discontinues dissipation of power, and the motor cutoff circuit reconnects the motor to the power supply. The robot may now resume normal operation.,” teaching releasing of the changed state (rapid braking) after the stop condition (impending collision) is removed)
Matsumura and Ng are considered analogous because they both relate to vehicle hazard mitigation. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the emergency vehicle control means of Matsumura by adding the removal of the rapid braking system operation in response to an end to the stop condition of Ng. This modification would be made with a reasonable expectation of success as motivated by increasing the efficiency of the control system.
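The following sketch illustrates the behavior addressed by claims 12 and 13: changing the driving state when the second sensor detects a target person, and releasing that change once the person is no longer detected. The speed scale factor and the class interface are hypothetical and are not drawn from Matsumura or Ng.

```python
class DrivingStateManager:
    """Change the driving state when the second sensor detects a target person and
    release the change once the person is no longer detected (claims 12 and 13)."""

    def __init__(self, normal_speed):
        self.normal_speed = normal_speed
        self.current_speed = normal_speed
        self.changed = False

    def update(self, target_person_detected):
        if target_person_detected and not self.changed:
            self.current_speed = 0.5 * self.normal_speed  # changing process: reduce speed
            self.changed = True
        elif not target_person_detected and self.changed:
            self.current_speed = self.normal_speed        # releasing process: restore state
            self.changed = False
        return self.current_speed
```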
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Matsumura in view of Nordbruch and Okada and further in view of Harman et al. (US 20210110700, hereinafter “Harman”) and Mizuno et al. (US 20220198919, hereinafter “Mizuno”).
Regarding Claim 16, the combination of Matsumura, Nordbruch, and Okada teaches the elements of claim 1 as described above. Matsumura does not teach:
wherein the predetermined detection condition includes the condition that the person wears the article associated with visitors, the condition related to the degree of match between the person and the feature in the face of the visitor registered in advance, and a condition related to the person's health.
Within the same field of endeavor as Matsumura, Okada teaches:
wherein the predetermined detection condition includes the […] the condition related to the degree of match between the person and the feature in the face (Okada ¶ 0056 lines 1-10 “The visitor identifying unit 454 identifies a visitor who has visited the showroom 1 from the in-showroom image that has been acquired by the in-facility image acquisition unit 451. For example, the visitor identifying unit 454 extracts a person from the in-showroom image, and further extracts (recognizes) a face image from the person that has been extracted. Then, the visitor identifying unit 454 searches the visitor data stored in the visitor database 443 for visitor data having a face image that matches the face image that has been extracted, and identifies the visitor,” teaching identification of visitors in an indoor space from images based on facial recognition (a predetermined condition related to a degree of match between the person and a feature in a face))
of the visitor registered in advance, […] (Okada ¶ 0048 lines 1-3 “The visitor DB 443 stores visitor information about the visitor who visits the showroom 1. The visitor information includes a face image of the visitor,” and ¶ 0049 lines 1-2 “In addition, the visitor DB 443 may store a visiting flag indicating a visitor who has visited the showroom 1” teaching a database of visitor images, analogous to registering visitor information in advance in that the database includes prior images of known visitors)
Matsumura and Okada are considered analogous because they both relate to remote vehicle control, Okada specifically relating to remote robot control in proximity to people. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the collision prediction of a user and another tracked object of Matsumura with the simple substitution of Okada’s visitor, identified from camera images by facial identification against a visitor information database including facial images, for the user. This modification would be made with a reasonable expectation of success as motivated by more appropriately serving visitors (Okada ¶ 0004), which, in the context of the safety system of Matsumura, would be understood by one of ordinary skill in the art to include providing safety warnings to identified visitors who may not be familiar with the space.
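For illustration of the claimed condition related to a degree of match with a registered face, the following minimal Python sketch (the embedding representation, database layout, and threshold are hypothetical and are not drawn from Okada) compares a detected face against faces registered in advance:

```python
# Illustrative sketch only; hypothetical embedding representation and threshold,
# not taken from Okada.
import numpy as np

def degree_of_match(face: np.ndarray, registered_face: np.ndarray) -> float:
    """Cosine similarity between a detected face embedding and a face
    registered in advance."""
    return float(np.dot(face, registered_face)
                 / (np.linalg.norm(face) * np.linalg.norm(registered_face)))

def identify_visitor(face: np.ndarray, visitor_db: dict, threshold: float = 0.8):
    """Returns the identifier of the registered visitor whose face best matches
    the detected face, or None if no match meets the threshold."""
    best_id, best_score = None, threshold
    for visitor_id, registered_face in visitor_db.items():
        score = degree_of_match(face, registered_face)
        if score >= best_score:
            best_id, best_score = visitor_id, score
    return best_id
```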
The combination of Matsumura and Okada does not teach:
[…] condition that the person wears the article associated with visitors, […]
[…] and a condition related to the person's health. […]
Within the same field of endeavor as Matsumura and Okada, Harman teaches:
[…] condition that the person wears the article associated with visitors, […] (Harman ¶ 0058 lines 21-36 “The outcomes 72 can additionally or alternatively be grouped by worker type, based on analysis of the image area 54. […] In this case, analysis of the image area 54 can be used to identify visitors (e.g., based on […] detecting a “Visitor” badge in the image area 54,” teaching analyzing a video to identify visitors based on the condition of a visitor’s badge in order to group different outcomes, analogous to the conditional treatment of visitors of Okada)
Matsumura, Okada, and Harman are all considered analogous because they all relate to video tracking of people. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have further modified the combination of Matsumura and Okada (collision prediction for a visitor identified from camera images by facial identification against a visitor information database including facial images) with the addition of Harman’s conditional identification of visitors from video analysis detecting a visitor’s badge. This modification would be made with a reasonable expectation of success as motivated by combining prior art elements (identification of a visitor based on facial analysis and based on identification of a visitor’s badge) according to known methods (using multiple criteria to identify a status) to yield predictable results (identification of people who may be at higher risk in an environment).
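For illustration of evaluating the visitor-article condition alongside the facial-match condition, the following minimal Python sketch (both helpers are hypothetical stand-ins, not APIs from Harman or Okada) treats the two cues as alternative criteria:

```python
# Illustrative sketch only; hypothetical helpers, not APIs from the cited references.

def detect_visitor_badge(image_labels: set) -> bool:
    """Hypothetical image-analysis helper: True if a "Visitor" badge is found
    in the labels extracted from the image region."""
    return "visitor_badge" in image_labels        # stub for illustration

def face_match_meets_threshold(degree_of_match: float, threshold: float = 0.8) -> bool:
    """Condition related to the degree of match with a face registered in advance."""
    return degree_of_match >= threshold

def satisfies_detection_condition(image_labels: set, degree_of_match: float) -> bool:
    """Predetermined detection condition: the person wears an article associated
    with visitors, or the face matches a pre-registered visitor face."""
    return detect_visitor_badge(image_labels) or face_match_meets_threshold(degree_of_match)
```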
The combination of Matsumura, Okada, and Harman does not teach:
[…] and a condition related to the person's health. […]
Within the same field of endeavor as Matsumura, Okada, and Harman, Mizuno teaches:
[…] and a condition related to the person's health. […] (Mizuno ¶ 0067 lines 1-9 “A “healthy person” is a person who is able to walk at an adult speed (such as 3 kilometers per hour) and who has no physical injury, illness, or disability. The criteria for being a healthy person may be set freely, but in this example, people who are preferably assisted when walking, such as an elderly person or a pregnant woman, are not included among healthy people. Any appropriate method can be employed to acquire information on whether a pedestrian is a healthy person,” and ¶ 0068 lines 1-11 “Alternatively, the controller 21 may acquire information on whether the pedestrian is a healthy person by receiving an image captured by the camera that captures the surroundings of the road that the pedestrian is about to cross. In this case, the controller 21 can analyze the image and determine whether the pedestrian is a healthy person or not. Any appropriate image analysis method can be used for analyzing the image, such as machine learning. For example, people with crutches, people in wheelchairs, pregnant women, and the elderly are not determined to be healthy people,” and ¶ 0099 “The controller 21 does not display the crosswalk in a case in which the pedestrian is not a healthy person and there is a general vehicle about to enter the road. This can prevent pedestrians who require extra assistance, such as pedestrians who walk slowly, from attempting to cross the road in a case in which there is a vehicle about to enter the road. Therefore, the safety of pedestrians when crossing the road can be improved more reliably,” teaching analyzing a video to identify not healthy people for conditional system responses)
Matsumura, Okada, Harman, and Mizuno are all considered analogous because they all relate to video tracking of people, Mizuno specifically relating to video tracking of pedestrians to increase vehicle safety. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have further modified the combination of Matsumura, Okada, and Harman (collision prediction for a visitor identified from camera images by facial identification against a visitor information database including facial images and by identification of a visitor’s badge) with the simple addition of Mizuno’s identification of pedestrians who are not healthy. This modification would be made with a reasonable expectation of success as motivated by more reliably improving safety for pedestrians who require extra assistance (Mizuno ¶ 0099), and furthermore by combining prior art elements (identification of a person based on facial analysis, on identification of a visitor’s badge, and on not being healthy and requiring extra assistance) according to known methods (using multiple criteria to identify a status) to yield predictable results (identification of people who may be at higher risk in an environment).
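For illustration of claim 16’s three-part detection condition, the following minimal Python sketch (all helpers are hypothetical stand-ins, and the conjunctive combination is only one plausible reading of how the recited conditions could be evaluated together, not the applicant’s disclosed method) evaluates the visitor-article, facial-match, and health-related cues:

```python
# Illustrative sketch only; hypothetical helpers, not APIs from Harman, Okada,
# or Mizuno, and the AND-combination is one plausible reading of the claim.

def wears_visitor_article(image_labels: set) -> bool:
    return "visitor_badge" in image_labels                    # Harman-style cue (stub)

def matches_registered_face(degree_of_match: float, threshold: float = 0.8) -> bool:
    return degree_of_match >= threshold                       # Okada-style cue

def requires_extra_assistance(image_labels: set) -> bool:
    # Mizuno-style cue (stub): people with crutches or wheelchairs are treated
    # as not healthy and requiring extra assistance.
    return "crutches" in image_labels or "wheelchair" in image_labels

def claim16_detection_condition(image_labels: set, degree_of_match: float) -> bool:
    """Evaluates the visitor-article, facial-match, and health-related conditions."""
    return (wears_visitor_article(image_labels)
            and matches_registered_face(degree_of_match)
            and requires_extra_assistance(image_labels))
```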
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZACHARY E GLADE whose telephone number is (703)756-1502. The examiner can normally be reached 4-5-9 7:30-16:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kito Robinson can be reached at (571) 270-3921. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ZACHARY E. F. GLADE/Examiner, Art Unit 3664
/KITO R ROBINSON/Supervisory Patent Examiner, Art Unit 3664