DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA .
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Examiner’s Note
Examiner has cited particular paragraphs/columns and line numbers or figures in the references as applied to the claims below for convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations with the individual claim, other passages and figures may apply as well. It is respectfully requested from the applicant, in preparing the responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner. Applicant is reminded that the Examiner is entitled to give the broadest reasonable interpretation to the language of the claims. Furthermore, the Examiner is not limited to the Applicant’s definition which is not specifically set forth in the claims.
Specification
The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware of, in the specification.
Status of Claims
The list of claims 1-20 is pending. In the claim set filed 11/10/2025:
Claim(s) 1 and 18 is/are the independent claim(s) observed in the instant application.
Claim(s) 1, 10, 12, 18 and 20 has/have been indicated as amended.
Claim(s) 2-9, 11, 13-17 and 19 has/have been indicated as originally presented.
Claim(s) 14 has/have been indicated as newly added.
Response to Arguments
With respect to Applicant’s remarks filed on 11/10/2025; the Applicant's “Amendments and Remarks” have been fully considered. The Applicant’s remarks will be addressed in sequential order as they were presented.
With respect to the Drawings Objection(s), the Applicant’s “Amendments and Remarks” have been fully considered and are persuasive. Therefore, the Drawings Objection(s) has/have been withdrawn.
With respect to the objection(s) of claim(s) 12, the Applicant’s “Amendments and Remarks” have been fully considered and are persuasive. Therefore, the rejection(s) of claim(s) the objection(s) of claim(s) 12 has/have been withdrawn.
With respect to the rejection(s) of claim(s) 1-11 and 13-20 under 35 U.S.C. § 101, the Applicant’s “Amendments and Remarks” have been fully considered but are not persuasive.
The Applicant’s amendments as filed 11/10/2025 differ from the proposed amendments discussed during the Interview conducted 11/06/2025 (with Examiner interview summary filed 11/10/2025). During the interview, the Applicant proposed amending the claims to recite: “controlling a mobile platform to follow the monitoring target or the warning object;” however, the claims as filed 11/10/2025 instead recite: “controlling a mobile platform to monitor the monitoring target or the warning object.”
The known definition in the art of the term “monitor” is the following: “to watch, keep track of, or check usually for a special purpose” (https://www.merriam-webster.com/dictionary/monitor). Therefore, the cited claim limitation instead amounts to no more than a recitation of the words “apply it” rather than a positively recited control step of a mobile platform using the claimed invention. Mere instructions to apply an exception are not satisfactory to make a claim patent eligible as explained in MPEP § 2106.05(f).
Put another way, as currently claimed, the claim recites: “A monitoring method comprising”… “controlling a mobile platform to monitor the monitoring target or the warning object,” rather than for example, reciting particular control steps that the claimed invention performs in order to achieve to goal of monitoring a target or warning target, such as actuating various implements on the UAV to steer the UAV to follow a moving target, for example.
Therefore, the rejection(s) of claim(s) 1-11 and 13-20 under 35 U.S.C. § 101 has/have been maintained.
With respect to the rejection(s) of claim(s) 1-20 under 35 U.S.C. § 102(a)(1) and 35 U.S.C. § 103, the Applicant’s “Amendments and Remarks” have been fully considered and are persuasive. Therefore, the rejection(s) of claim(s) 1-20 under 35 U.S.C. § 102(a)(1) and 35 U.S.C. § 103 has/have been withdrawn.
Office Note: Due to applicant’s amendments, further claim rejections appear on the record as stated in the Final Office Action below.
Final Office Action
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim(s) 1-11 and 13-20 is/are rejected under 35 USC 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claim(s) 1 and 18 is/are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim(s) recite(s) using “one or more processors” and “a data collection apparatus” to perform the following: identifying a monitoring target and a warning object; obtaining position information of the monitoring target and the warning object; determining a warning area; and generating warning information(wherein generating warning information comprises: “extracting motion information of the warning object based on the position information of the warning object, generating a predicted position of the warning object according to the motion information, and generating the warning information in response to the predicted position of the warning object and the warning area satisfying a predetermined condition”).
The limitations of identifying a monitoring target and a warning object; obtaining position information of the monitoring target and the warning object; determining a warning area; and generating warning information, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting “one or more processors” and “a data collection apparatus” nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the “one or more processors” and “a data collection apparatus” language, in the context of this claim encompasses the user manually performing steps of using their eyes to observe a monitoring target and the warning object, drawing a warning area around the monitoring target, and measuring a distance between the warning area and a predicted future position of the warning object to determine whether or not to generate broadly warning information based on a predetermined condition being satisfied. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim only recites two additional elements – “one or more processors” and “a data collection apparatus” to perform identifying a monitoring target and a warning object; obtaining position information of the monitoring target and the warning object; determining a warning area; and generating warning information(wherein generating warning information comprises: “extracting motion information of the warning object based on the position information of the warning object, generating a predicted position of the warning object according to the motion information, and generating the warning information in response to the predicted position of the warning object and the warning area satisfying a predetermined condition”). The “one or more processors” and “a data collection apparatus” in these steps are recited at a high-level of generality (i.e., as a generic processor performing a generic computer function of generating, transmitting, receiving data from a generic sensor and outputting data) such that it amounts no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “one or more processors” and “a data collection apparatus” to perform identifying a monitoring target and a warning object; obtaining position information of the monitoring target and the warning object; determining a warning area; and generating warning information(wherein generating warning information comprises: “extracting motion information of the warning object based on the position information of the warning object, generating a predicted position of the warning object according to the motion information, and generating the warning information in response to the predicted position of the warning object and the warning area satisfying a predetermined condition”) amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Examiner’s Note: The amended claim limitations to independent claims 1 and 18 reciting: “controlling a mobile platform to monitor the monitoring target or the warning object” are not satisfactory to integrate a judicial exception into a practical application. The known definition in the art of the term “monitor” is the following: “to watch, keep track of, or check usually for a special purpose” (https://www.merriam-webster.com/dictionary/monitor). Therefore, the cited claim limitation instead amounts to no more than a recitation of the words “apply it” rather than a positively recited control step of a mobile platform using the claimed invention. Mere instructions to apply an exception are not satisfactory to make a claim patent eligible as explained in MPEP § 2106.05(f).
Dependent claim(s) 2-11, 13-17, 19 and 20 when analyzed as a whole, is/are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitation(s) fail(s) to establish that the claim(s) is/are not directed to an abstract idea. The additional element(s), if any, in the dependent claim(s) is/are not sufficient to amount to significantly more than the judicial exception for the same reasons as with claim(s) 1 and 18.
Examiner’s Note: In order to overcome this rejection, the Office suggests further defining the limitations of the independent claim(s), for example linking the claimed subject matter to a non-generic device and controlling a vehicle or an apparatus in a specific way based on the data comparison performed or further showing that the claimed subject matter is an improvement to a technical field. Limitations such as these suggested above would further bring the claimed subject matter out of the realm of abstract idea and into the realm of a statutory category.
For example, in claim 12, the Applicant claims additional elements that result in claims, which are no longer directed towards an abstract idea, and therefore, claim 12 is not rejected under 35 U.S.C. 101. This is due to the recitation of: “wherein: the warning object is mobile, and the method further includes controlling the mobile platform to follow the warning object; and/or the monitoring target is mobile, and the method further includes controlling the mobile platform to follow the monitoring target.”
Amending independent claims 1 and 18 such that they positively recite “controlling a mobile platform” to follow the warning object and/or the monitoring target would similarly overcome the rejections presented above under 35 U.S.C. 101.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).
Claim(s) 1, 2, 5, 6, 8, 10-14, 16 and 18-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over KADAKA (Japanese Patent Publication 2020017155A) in view of WANG et al. (Chinese Patent Publication 107818651 A1), referenced as Kadaka and Wang, respectively, moving forward.
With respect to claim 1, Kadaka discloses: “A monitoring method comprising: identifying a monitoring target and a warning object in space according to data collected by a data collection apparatus; obtaining position information of the monitoring target and the warning object; determining a warning area based on the position information of the monitoring target; generating warning information based on a position relationship between a position of the warning object and the warning area” [Kadaka; "The portable flight monitoring terminal 100 flies up into the sky above the person being monitored 1, and when it acquires an image of the person being monitored 1 with a camera 110, it recognizes the person being monitored 1 or the tracking mark he or she has attached as the object to be tracked. After that, images are acquired at predetermined time intervals, and if the monitored person 1 moves, this is followed. When dealing with a suspicious person, people photographed in the vicinity of the monitored person 1 are identified, and a suspicious person is judged based on frequency and conditions described later;" ¶: 0023;
"This checks whether the suspected suspicious person has entered an area (warning zone) determined by a distance L centered on the monitored person...This means that if a person's face is exposed, they will not be judged as suspicious even if they get closer than 3 meters, but if their face is hidden, they will be judged as suspicious if they get closer than 10 meters;" ¶: 0028;
"If the object enters the alert area of L or is detected a predetermined number of times or for a predetermined period of time or more in 13), 16) ID/location information acquisition is performed. The identity of the monitored person 1 (the correspondence between ID and individual is assumed to be established in advance through a membership system or registration) and the location can be determined...In the alarm/warning/transmission mode, the ID/location information and an image of the suspicious person are transmitted to the monitoring base station 200, and are also reported to people in the vicinity, and are displayed on the risk display means 150 with sound and light. In some cases, it is also possible to instruct the monitored person 1 to change his/her walking style and move towards people (for safety);" ¶: 0029; See also: Fig. 1-3; ¶: 0019, 0020, 0024];
“and controlling a mobile platform to monitor the monitoring target or the warning object” [Kadaka; "Tracking flight control: The aircraft moves horizontally and vertically so that the monitored person or monitoring mark is in the center of the screen and at a size according to the desired height. If there is any deviation, fly it so that it is in the center. The size of the image is related to the altitude, so the size is compared and the drone flies up and down to reach the specified altitude;" Fig. 4; ¶: 0026; See also: ¶: 0033];
And while Kadaka discloses reacting to “If a suspected suspicious person captured in the vicinity of the monitored person in the image satisfies a predetermined condition” [Kadaka; ¶: 0008], Kadaka does not specifically state that the predetermined condition is based on a predicted future position of the suspected suspicious person.
Wang, which is in the same field of invention of systems/methods for performing monitoring of a target area, teaches: “the generating the warning information including extracting motion information of the warning object based on the position information of the warning object, generating a predicted position of the warning object according to the motion information, and generating the warning information in response to the predicted position of the warning object and the warning area satisfying a predetermined condition” [Wang; In at least the paragraphs and figures cited, Wang teaches issuing an alarm when it is determined that anyone has entered a "warning area," wherein this determination is based on: tracking a person in an image identified as "unauthorized intruder," using historical location information to predict the possible location of the "unauthorized intruder" in the current frame (image) to track the movements of the person. The above disclosed "unauthorized intruder" and "alarm" have been interpreted as patentably indistinct from the Applicant's broadly recited "warning object" and "warning information," respectively; ¶: 0075-0078].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system/method for controlling a UAV to perform portable monitoring as disclosed by Kadaka to incorporate the teachings regarding analyzing video surveillance information over a plurality of frames and using historic information of a target person to predict a position of the target person as taught by Wang with a reasonable expectation of success. By combining these inventions, the outcome is a system/method for controlling a UAV to perform portable monitoring that is more robust in its ability to perform real-time accurate analysis of video surveillance images such that potential safety risks can be automatically alerted to reduce potential safety hazards and avoid disasters [Wang; ¶: 0078].
With respect to claim 2, Kadaka discloses: “further comprising: obtaining an orthographic image or a stereoscopic image of a target area where the monitoring target is located; and displaying the warning area in the orthographic image or the stereoscopic image” [Kadaka; In at least Fig. 2(a)-2(g), Kadaka discloses a plurality of orthographic images as they comprise 2d projections of a 3d objects from above such that they are projected on a flat plane].
With respect to claim 5, Kadaka discloses: “wherein: the data collection apparatus includes a camera apparatus; the collected data includes an image; and obtaining the position information of the monitoring target and the warning object includes: obtaining the position information of the monitoring target and the warning object when the monitoring target is in a center area of the image” [Kadaka; "This checks whether the suspected suspicious person has entered an area (warning zone) determined by a distance L centered on the monitored person...This means that if a person's face is exposed, they will not be judged as suspicious even if they get closer than 3 meters, but if their face is hidden, they will be judged as suspicious if they get closer than 10 meters;" ¶: 0028;
"If the object enters the alert area of L or is detected a predetermined number of times or for a predetermined period of time or more in 13), 16) ID/location information acquisition is performed. The identity of the monitored person 1 (the correspondence between ID and individual is assumed to be established in advance through a membership system or registration) and the location can be determined...In the alarm/warning/transmission mode, the ID/location information and an image of the suspicious person are transmitted to the monitoring base station 200, and are also reported to people in the vicinity, and are displayed on the risk display means 150 with sound and light. In some cases, it is also possible to instruct the monitored person 1 to change his/her walking style and move towards people (for safety);" ¶: 0029; See also: Fig. 1-3; ¶: 0019, 0020, 0023, 0024].
With respect to claim 6, Kadaka discloses: “wherein: the position information of the monitoring target includes a determined position of the monitoring target, and the warning area is determined according to the determined position and a predetermined area model; and/or the position information of the monitoring target includes an edge position of the monitoring target, and the warning area is determined according to the edge position and a predetermined buffer distance” [Kadaka; "This checks whether the suspected suspicious person has entered an area (warning zone) determined by a distance L centered on the monitored person...This means that if a person's face is exposed, they will not be judged as suspicious even if they get closer than 3 meters, but if their face is hidden, they will be judged as suspicious if they get closer than 10 meters;" ¶: 0028;
"If the object enters the alert area of L or is detected a predetermined number of times or for a predetermined period of time or more in 13), 16) ID/location information acquisition is performed. The identity of the monitored person 1 (the correspondence between ID and individual is assumed to be established in advance through a membership system or registration) and the location can be determined...In the alarm/warning/transmission mode, the ID/location information and an image of the suspicious person are transmitted to the monitoring base station 200, and are also reported to people in the vicinity, and are displayed on the risk display means 150 with sound and light. In some cases, it is also possible to instruct the monitored person 1 to change his/her walking style and move towards people (for safety);" ¶: 0029; See also: Fig. 1-3; ¶: 0019, 0020, 0023, 0024].
With respect to claim 8, Kadaka discloses: “further comprising: obtaining type information of the monitoring target; and determining the warning area according to the position information and the type information of the monitoring target” [Kadaka; "This checks whether the suspected suspicious person has entered an area (warning zone) determined by a distance L centered on the monitored person...This means that if a person's face is exposed, they will not be judged as suspicious even if they get closer than 3 meters, but if their face is hidden, they will be judged as suspicious if they get closer than 10 meters;" ¶: 0028;
"If the object enters the alert area of L or is detected a predetermined number of times or for a predetermined period of time or more in 13), 16) ID/location information acquisition is performed. The identity of the monitored person 1 (the correspondence between ID and individual is assumed to be established in advance through a membership system or registration) and the location can be determined...In the alarm/warning/transmission mode, the ID/location information and an image of the suspicious person are transmitted to the monitoring base station 200, and are also reported to people in the vicinity, and are displayed on the risk display means 150 with sound and light. In some cases, it is also possible to instruct the monitored person 1 to change his/her walking style and move towards people (for safety);" ¶: 0029; See also: Fig. 1-3; ¶: 0019, 0020, 0023, 0024].
With respect to claim 10, Kadaka discloses: “wherein generating the warning information based on the position relationship includes: generating the warning information in response to the warning object being in the warning area or generating the warning information in response to a distance between the position of the warning object and an edge of the warning area being smaller than a predetermined distance threshold” [Kadaka; "If the object enters the alert area of L or is detected a predetermined number of times or for a predetermined period of time or more in 13), 16) ID/location information acquisition is performed. The identity of the monitored person 1 (the correspondence between ID and individual is assumed to be established in advance through a membership system or registration) and the location can be determined...In the alarm/warning/transmission mode, the ID/location information and an image of the suspicious person are transmitted to the monitoring base station 200, and are also reported to people in the vicinity, and are displayed on the risk display means 150 with sound and light. In some cases, it is also possible to instruct the monitored person 1 to change his/her walking style and move towards people (for safety);" ¶: 0029; See also: Fig. 1-3; ¶: 0019, 0020, 0023, 0024, 0028].
With respect to claim 11, Kadaka discloses: “The method according to claim 1, further comprising: sending the position information of the monitoring target to a mobile device to cause the mobile device to perform a target task according to the position information, the target task including capturing an image of the monitoring target and/or issuing audio information to the warning object” [Kadaka; "If the object enters the alert area of L or is detected a predetermined number of times or for a predetermined period of time or more in 13), 16) ID/location information acquisition is performed. The identity of the monitored person 1 (the correspondence between ID and individual is assumed to be established in advance through a membership system or registration) and the location can be determined...In the alarm/warning/transmission mode, the ID/location information and an image of the suspicious person are transmitted to the monitoring base station 200, and are also reported to people in the vicinity, and are displayed on the risk display means 150 with sound and light. In some cases, it is also possible to instruct the monitored person 1 to change his/her walking style and move towards people (for safety);" ¶: 0029; See also: Fig. 1-3; ¶: 0019, 0020, 0023, 0024, 0028].
With respect to claim 12, Kadaka discloses: “wherein: the warning object is mobile, and the method further includes controlling the mobile platform to follow the warning object; and/or the monitoring target is mobile, and the method further includes controlling the mobile platform to follow the monitoring target” [Kadaka; "The portable flight monitoring terminal 100 flies up into the sky above the person being monitored 1, and when it acquires an image of the person being monitored 1 with a camera 110, it recognizes the person being monitored 1 or the tracking mark he or she has attached as the object to be tracked. After that, images are acquired at predetermined time intervals, and if the monitored person 1 moves, this is followed. When dealing with a suspicious person, people photographed in the vicinity of the monitored person 1 are identified, and a suspicious person is judged based on frequency and conditions described later. It is also possible to
follow suspicious-looking individuals. In response to a distress, the portable flight monitoring terminal 100 transmits the ID and location information of the monitored person 1 to the monitoring base station 200 for rescue;" ¶: 0023; See also: Fig. 1-3; ¶: 0019, 0020, 0024, 0028, 0029].
With respect to claim 13, Kadaka discloses:
“wherein: the data collection apparatus includes a camera apparatus; the collected data includes an image collected by the camera apparatus” [Kadaka; "The invention of claim 1 is a portable flight surveillance terminal comprising a camera, an image determination means for processing images from the camera...the image determination means using the camera to capture an image of the person being monitored or a following mark worn by the person being monitored, recognizing that it is to be followed, and based on the position of the person being monitored (or possibly a suspected suspicious person) or the following mark in the image and the deviation of the size, if necessary, from a predetermined value, causes the drone to fly in a following manner via the flight control means;" Fig. 1; ¶: 0008];
“and the position information of the monitoring target and the warning object is determined based on a pose of the camera apparatus when collecting the image” [Kadaka; "The square frame represents the screen from which the image was taken. The upper right shows a case where the portable flight monitoring terminal 100 is positioned above the person being monitored, with the image in the center. The circle indicates the monitored person 1 or his/her mark, and the face mark indicates a suspicious person candidate. In (a), the drone detects that the circle is moving to the left as the image is taken, and controls its
flight so that the circle is in the center of the screen frame, resulting in (b). In this way, tracking control using position information on the image plane is extremely direct, fast-responding, and effective. Using GPS position information for tracking control does not provide sufficient tracking due to communication delays and errors in the position information. The height of the localized state can be measured using an altimeter or by means of sound wave reflection, but if the size is known, it can be seen from the size of the image, so means such as an altimeter are not necessarily required. In (c), if a suspicious person is about to leave the image frame, the image frame can be widened if observation is desired to continue. For example, the altitude of the portable flight monitoring terminal 100 can be increased or the magnification of the camera 110 can be changed, but since it is necessary to simplify the configuration, the former is more convenient. (d) shows the result of widening the image frame. (e) shows a case where a suspected suspicious person comes within a predetermined distance from the monitored person 1. Depending on these conditions, a suspicious person is judged, an image is sent to the monitoring base station 200 and a report is sent, nearby people are notified, and a display is made on the risk display means 150" ¶: 0024; See also: Fig. 1-3; ¶: 0019, 0020, 0023, 0028, 0029].
With respect to claim 14, Kadaka discloses:
“wherein obtaining the position information of the monitoring target includes: obtaining pixel position information of the monitoring target in the image” [Kadaka; "The monitored person 1 or the monitoring mark attached thereto in the acquired image is image-recognized, and thereafter, it is tracked at predetermined time intervals so as to come to the center of the image. 7) Check whether the person being monitored or the monitoring mark has been recognized. If it is not recognized, the image is acquired via 15) and checked again;" Fig. 4; ¶: 0026; See also: Fig. 1-3; ¶: 0019, 0020, 0023, 0024, 0028, 0029];
“obtaining pose information of the camera apparatus; and calculating the position information of the monitoring target according to the pixel position information and the pose information” [Kadaka; "Once confirmed, tracking becomes possible. 8) Detect the deviation and size of the monitored person or monitoring mark from the center of the camera screen. 9) Tracking flight control: The aircraft moves horizontally and vertically so that the monitored person or monitoring mark is in the center of the screen and at a size according to the desired height. If there is any deviation, fly it so that it is in the center. The size of the image is related to the altitude, so the size is compared and the drone flies up and down to reach the specified altitude;" Fig. 4; ¶: 0026; See also: Fig. 1-3; ¶: 0019, 0020, 0023, 0024, 0028, 0029].
With respect to claim 16, Kadaka discloses:
“wherein performing the correction on the position information of the monitoring target includes: recognizing a measurement point in the image and obtaining pixel position information of the measurement point” [Kadaka; "The monitored person 1 or the monitoring mark attached thereto in the acquired image is image-recognized, and thereafter, it is tracked at predetermined time intervals so as to come to the center of the image. 7) Check whether the person being monitored or the monitoring mark has been recognized. If it is not recognized, the image is acquired via 15) and checked again;" Fig. 4; ¶: 0026; See also: Fig. 1-3; ¶: 0019, 0020, 0023, 0024, 0028, 0029];
“obtaining the pose information of the camera apparatus; calculating position information of the measurement point according to the pixel position information and the pose information” [Kadaka; "Once confirmed, tracking becomes possible. 8) Detect the deviation and size of the monitored person or monitoring mark from the center of the camera screen. 9) Tracking flight control: The aircraft moves horizontally and vertically so that the monitored person or monitoring mark is in the center of the screen and at a size according to the desired height. If there is any deviation, fly it so that it is in the center. The size of the image is related to the altitude, so the size is compared and the drone flies up and down to reach the specified altitude;" Fig. 4; ¶: 0026; See also: Fig. 1-3; ¶: 0019, 0020, 0023, 0024, 0028, 0029];
“and determining error information based on the position information of the measurement point and actual position information of the measurement point” [Kadaka; "a camera equipped in the portable flight surveillance terminal is used to capture an image including the person being monitored or a tracking marker carried by the person being monitored, a deviation from a predetermined value in the position and, if necessary, size of the person being monitored or the tracking marker in the image is detected, and the drone equipped with the camera is caused to fly in a tracking manner to eliminate the deviation;" ¶: 0015; See also: Fig. 1-3; ¶: 0019, 0020, 0023, 0024, 0028, 0029].
With respect to claim 18, Kadaka discloses:
“A monitoring apparatus comprising: one or more processors; and one or more memories storing one or more executable instructions that, when executed by the one or more processors, cause the one or more processors to: identify a monitoring target and a warning object in space according to data collected by a data collection apparatus; obtain position information of the monitoring target and the warning object; determine a warning area based on the position information of the monitoring target; and generate warning information based on a position relationship between a position of the warning object and the warning area” [Kadaka; "The portable flight monitoring terminal 100 flies up into the sky above the person being monitored 1, and when it acquires an image of the person being monitored 1 with a camera 110, it recognizes the person being monitored 1 or the tracking mark he or she has attached as the object to be tracked. After that, images are acquired at predetermined time intervals, and if the monitored person 1 moves, this is followed. When dealing with a suspicious person, people photographed in the vicinity of the monitored person 1 are identified, and a suspicious person is judged based on frequency and conditions described later;" ¶: 0023;
"This checks whether the suspected suspicious person has entered an area (warning zone) determined by a distance L centered on the monitored person...This means that if a person's face is exposed, they will not be judged as suspicious even if they get closer than 3 meters, but if their face is hidden, they will be judged as suspicious if they get closer than 10 meters;" ¶: 0028;
"If the object enters the alert area of L or is detected a predetermined number of times or for a predetermined period of time or more in 13), 16) ID/location information acquisition is performed. The identity of the monitored person 1 (the correspondence between ID and individual is assumed to be established in advance through a membership system or registration) and the location can be determined...In the alarm/warning/transmission mode, the ID/location information and an image of the suspicious person are transmitted to the monitoring base station 200, and are also reported to people in the vicinity, and are displayed on the risk display means 150 with sound and light. In some cases, it is also possible to instruct the monitored person 1 to change his/her walking style and move towards people (for safety);" ¶: 0029; See also: Fig. 1-3; ¶: 0019, 0020, 0024];
“and control a mobile platform to monitor the monitoring target or the warning object” [Kadaka; "Tracking flight control: The aircraft moves horizontally and vertically so that the monitored person or monitoring mark is in the center of the screen and at a size according to the desired height. If there is any deviation, fly it so that it is in the center. The size of the image is related to the altitude, so the size is compared and the drone flies up and down to reach the specified altitude;" Fig. 4; ¶: 0026; See also: ¶: 0033];
And while Kadaka discloses reacting to “If a suspected suspicious person captured in the vicinity of the monitored person in the image satisfies a predetermined condition” [Kadaka; ¶: 0008], Kadaka does not specifically state that the predetermined condition is based on a predicted future position of the suspected suspicious person.
Wang teaches: “the generating the warning information including extracting motion information of the warning object based on the position information of the warning object, generating a predicted position of the warning object according to the motion information, and generating the warning information in response to the predicted position of the warning object and the warning area satisfying a predetermined condition” [Wang; In at least the paragraphs and figures cited, Wang teaches issuing an alarm when it is determined that anyone has entered a "warning area," wherein this determination is based on: tracking a person in an image identified as "unauthorized intruder," using historical location information to predict the possible location of the "unauthorized intruder" in the current frame (image) to track the movements of the person. The above disclosed "unauthorized intruder" and "alarm" have been interpreted as patentably indistinct from the Applicant's broadly recited "warning object" and "warning information," respectively; ¶: 0075-0078].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system/method for controlling a UAV to perform portable monitoring as disclosed by Kadaka to incorporate the teachings regarding analyzing video surveillance information over a plurality of frames and using historic information of a target person to predict a position of the target person as taught by Wang with a reasonable expectation of success. By combining these inventions, the outcome is a system/method for controlling a UAV to perform portable monitoring that is more robust in its ability to perform real-time accurate analysis of video surveillance images such that potential safety risks can be automatically alerted to reduce potential safety hazards and avoid disasters [Wang; ¶: 0078].
With respect to claim 19, Kadaka discloses:
“wherein: the data collection apparatus includes a camera apparatus; the collected data includes an image” [Kadaka; "The invention of claim 1 is a portable flight surveillance terminal comprising a camera, an image determination means for processing images from the camera...the image determination means using the camera to capture an image of the person being monitored or a following mark worn by the person being monitored, recognizing that it is to be followed, and based on the position of the person being monitored (or possibly a suspected suspicious person) or the following mark in the image and the deviation of the size, if necessary, from a predetermined value, causes the drone to fly in a following manner via the flight control means;" Fig. 1; ¶: 0008];
“and the position information of the monitoring target and the warning object is determined based on a pose of the camera apparatus when collecting the image” [Kadaka; "The square frame represents the screen from which the image was taken. The upper right shows a case where the portable flight monitoring terminal 100 is positioned above the person being monitored, with the image in the center. The circle indicates the monitored person 1 or his/her mark, and the face mark indicates a suspicious person candidate. In (a), the drone detects that the circle is moving to the left as the image is taken, and controls its
flight so that the circle is in the center of the screen frame, resulting in (b). In this way, tracking control using position information on the image plane is extremely direct, fast-responding, and effective. Using GPS position information for tracking control does not provide sufficient tracking due to communication delays and errors in the position information. The height of the localized state can be measured using an altimeter or by means of sound wave reflection, but if the size is known, it can be seen from the size of the image, so means such as an altimeter are not necessarily required. In (c), if a suspicious person is about to leave the image frame, the image frame can be widened if observation is desired to continue. For example, the altitude of the portable flight monitoring terminal 100 can be increased or the magnification of the camera 110 can be changed, but since it is necessary to simplify the configuration, the former is more convenient. (d) shows the result of widening the image frame. (e) shows a case where a suspected suspicious person comes within a predetermined distance from the monitored person 1. Depending on these conditions, a suspicious person is judged, an image is sent to the monitoring base station 200 and a report is sent, nearby people are notified, and a display is made on the risk display means 150" ¶: 0024; See also: Fig. 1-3; ¶: 0019, 0020, 0023, 0028, 0029].
With respect to claim 20, Kadaka discloses: “wherein the one or more processors are further configured to: generate the warning information in response to the warning object being in the warning area or generate the warning information in response to a distance between a position of the warning object and a position of an edge of the warning area being smaller than a predetermined distance threshold” [Kadaka; "If the object enters the alert area of L or is detected a predetermined number of times or for a predetermined period of time or more in 13), 16) ID/location information acquisition is performed. The identity of the monitored person 1 (the correspondence between ID and individual is assumed to be established in advance through a membership system or registration) and the location can be determined...In the alarm/warning/transmission mode, the ID/location information and an image of the suspicious person are transmitted to the monitoring base station 200, and are also reported to people in the vicinity, and are displayed on the risk display means 150 with sound and light. In some cases, it is also possible to instruct the monitored person 1 to change his/her walking style and move towards people (for safety);" ¶: 0029; See also: Fig. 1-3; ¶: 0019, 0020, 0023, 0024, 0028].
Claim(s) 3 and 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kadaka in view of Wang and PENG et al. (Chinese Patent Publication 109117749 A), referenced as Peng moving forward.
With respect to claim 3, Kadaka does not specifically state: “wherein the orthographic image is an image obtained by synthesizing the data collected by the data collection apparatus.”
Peng, which is in the same field of invention of systems/methods for performing monitoring with UAV’s, teaches: “wherein the orthographic image is an image obtained by synthesizing the data collected by the data collection apparatus” [Peng; "In step 1.3, the drone's status information, the camera's internal parameters, and the captured images are sent back to the ground station as a basis for image stitching and intelligent image recognition. The drone's status information includes flight attitude and altitude information and flight geographic coordinates; the camera's internal parameters include focal length and aperture information;" ¶: 0021; See also: ¶: 0011-0017].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system/method for controlling a UAV to perform portable monitoring as disclosed by Kadaka to incorporate the teachings regarding using the internal parameters of a camera mounted on a drone to perform image stitching for intelligent image recognition as taught by Peng with a reasonable expectation of success. By combining these inventions, the outcome is a system/method for controlling a UAV to perform portable monitoring that is more robust in its ability to perform matching with improved calculation speed by using SIFT feature extraction [Peng; ¶: 0076].
With respect to claim 4, Kadaka does not specifically state: “The method according to claim 2, further comprising: obtaining a three-dimensional model of the target area, the three-dimensional model being created through the data collected by the data collection apparatus; wherein obtaining the orthographic image includes obtaining the orthographic image through the three-dimensional model.”
Peng teaches:
“The method according to claim 2, further comprising: obtaining a three-dimensional model of the target area, the three-dimensional model being created through the data collected by the data collection apparatus” [Peng; "In step 2.2, based on the feature point pairs between the established two-dimensional images, the pose information of each image and the three-dimensional coordinates of the observation points are obtained through nonlinear optimization using the structure from motion (SfM) method to obtain a sparse point cloud of the UAV aerial photography scene;" ¶: 0024; See also: ¶: 0022, 0023 and 0025-0028];
“wherein obtaining the orthographic image includes obtaining the orthographic image through the three-dimensional model” [Peng; "Step 2.5: Based on the constructed mesh model, texture mapping is implemented through the previously established relationship between the image and the triangle patch to obtain a texture mesh model of the real scene;" ¶: 0027;
"Step 2.6, generate an orthographic projection with geographic coordinates and an elevation map according to the projection direction;" ¶: 0028; See also: ¶: 0022-0026].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system/method for controlling a UAV to perform portable monitoring as disclosed by Kadaka to incorporate the teachings regarding using the internal parameters of a camera mounted on a drone to perform image stitching for intelligent image recognition as taught by Peng with a reasonable expectation of success. By combining these inventions, the outcome is a system/method for controlling a UAV to perform portable monitoring that is more robust in its ability to perform matching with improved calculation speed by using SIFT feature extraction [Peng; ¶: 0076].
Claim(s) 7, 15 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kadaka in view of Wang and Kudrynski et al. (United States Patent Publication 2018/0209796 A1), referenced as Kudrynski moving forward.
With respect to claim 7, Kadaka does not specifically state: “wherein the edge position is determined through a feature point at an outer surface of the monitoring target.”
Kudrynski, which is in the same field of invention of systems/methods for controlling vehicles, teaches: “wherein the edge position is determined through a feature point at an outer surface of the monitoring target” [Kudrynski; "In other words, the location reference data comprises at least one depth map, e.g. raster image, indicative of the environment around the vehicle, wherein each pixel of the at least one depth map is associated with a position in the reference plane, and each pixel includes a channel representing the lateral distance, e.g. normal to the reference plane, to a surface of an object in the environment. In such embodiments, the relevant depth map, e.g. raster image, is processed using an edge detection algorithm to generate the outline of the objects in the environment;" ¶: 0094; See also: ¶: 0095, 0098].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system/method for controlling a UAV to perform portable monitoring as disclosed by Kadaka to incorporate the teachings regarding correcting lateral and longitudinal position information based on matching of parameters with a previously stored model as taught by Kudrynski with a reasonable expectation of success. By combining these inventions, the outcome is a system/method for controlling a UAV to perform portable monitoring that is more robust in its ability to know its position relative to a digital maps at all times with a high degree of accuracy [Kudrynski; ¶: 0086].
With respect to claim 15, while Kadaka discloses: “The method according to claim 14, wherein: the position information of the monitoring target includes horizontal position information and height information” [Kadaka; "The square frame represents the screen from which the image was taken. The upper right shows a case where the portable flight monitoring terminal 100 is positioned above the person being monitored, with the image in the center. The circle indicates the monitored person 1 or his/her mark, and the face mark indicates a suspicious person candidate. In (a), the drone detects that the circle is moving to the left as the image is taken, and controls its flight so that the circle is in the center of the screen frame, resulting in (b). In this way, tracking control using position information on the image plane is extremely direct, fast-responding, and effective. Using GPS position information for tracking control does not provide sufficient tracking due to communication delays and errors in the position information. The height of the localized state can be measured using an altimeter or by means of sound wave reflection;" Fig. 2(a)-2(e); ¶: 0024],
Kadaka does not specifically state: “and obtaining the position information further includes: looking up a correction value of the height information using a predetermined terrain model according to the horizontal position information; and updating the horizontal position information using the correction value.”
Kudrynski teaches: “and obtaining the position information further includes: looking up a correction value of the height information using a predetermined terrain model according to the horizontal position information; and updating the horizontal position information using the correction value” [Kudrynski; In at least the paragraphs and figures cited, Kudrynski discloses using a plurality of vehicle sensors to collect environment data to generate localization reference data(patentably indistinct from the Applicant's broadly defined "terrain model") that is subsequently stored in a database; calculating a correlation between the localizations reference data and the real time scan data to determine an alignment offset(patentably indistinct from the Applicant's broadly defined "correction value") between the depth maps; and using the determined alignment offset to adjust the deemed current position to determine the position of the vehicle relative to the digital map; ¶: 0065-0069;
Kudrynski further discloses in ¶: 0272: "As can be seen from FIGS. 9, 10B, 11 and 12, the localization reference data and the sensed environment data preferably are in the form of depth maps, wherein each element (e.g. pixel when the depth map is stored as an image) comprises: a first value indicative of a longitudinal position (along a road); a second value indicative of an elevation (i.e. a height above ground); and a third value indicative of a lateral position (across a road). Each element, e.g. pixel, of the depth map therefore effectively corresponds to a portion of a surface of the environment around the vehicle;" See also: Fig. 8-11; ¶: 0258-0267].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system/method for controlling a UAV to perform portable monitoring as disclosed by Kadaka to incorporate the teachings regarding correcting lateral and longitudinal position information based on matching of parameters with a previously stored model as taught by Kudrynski with a reasonable expectation of success. By combining these inventions, the outcome is a system/method for controlling a UAV to perform portable monitoring that is more robust in its ability to know its position relative to a digital maps at all times with a high degree of accuracy [Kudrynski; ¶: 0086].
With respect to claim 17, Kadaka does not specifically state: “wherein: the measurement point is a road sign with known actual position information; or determining the actual position information of the measurement point includes at least one of: determining the actual position information of the measurement point based on point cloud information obtained by a laser radar carried by the mobile platform for the measurement point; or calculating the actual position information of the measurement point based on a visual algorithm.”
Kudrynski teaches: “wherein: the measurement point is a road sign with known actual position information; or determining the actual position information of the measurement point includes at least one of: determining the actual position information of the measurement point based on point cloud information obtained by a laser radar carried by the mobile platform for the measurement point; or calculating the actual position information of the measurement point based on a visual algorithm” [Kudrynski; "In accordance with some further aspects and embodiments of the invention the method comprises using the localization reference data to determine a reference point cloud indicative of the environment around the navigable element, the reference point cloud including a set of first data points in a three-dimensional coordinate system, wherein each first data point represents a surface of an object in the environment;" ¶: 0136].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system/method for controlling a UAV to perform portable monitoring as disclosed by Kadaka to incorporate the teachings regarding correcting lateral and longitudinal position information based on matching of parameters with a previously stored model as taught by Kudrynski with a reasonable expectation of success. By combining these inventions, the outcome is a system/method for controlling a UAV to perform portable monitoring that is more robust in its ability to know its position relative to a digital maps at all times with a high degree of accuracy [Kudrynski; ¶: 0086].
Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kadaka in view of Wang and Lin et al. (United States Patent Publication 2015/0010213 A1), referenced as Lin moving forward.
With respect to claim 9, Kadaka does not specifically state: “wherein: the warning area includes a plurality of sub-areas with different warning levels; and the plurality of sub-areas with different warning levels correspond to warning information of different levels.”
Lin, which is in the same field of invention of systems/methods for object monitoring, teaches: “wherein: the warning area includes a plurality of sub-areas with different warning levels; and the plurality of sub-areas with different warning levels correspond to warning information of different levels” [Lin; "As discussed above, in the step S240, the image processing apparatus 120 calculates the distance D1 between the monitored object 330 and the reference target 310 and the distance D2 between the monitored object 330 and the reference target 320;" Fig. 2; ¶: 0043;
"Specifically, the image processing apparatus 120 may calculate the function. In the present embodiment, the function is a1D1 b1+a2D2 b2+k, wherein a1, a2, b1, b2, and k are real numbers and may be determined by the designer or the user of the image surveillance system 100 based on actual demands. For instance, the function may be set as a1D1+a2D2. The image processing apparatus 120 may then determine whether the function (i.e., a1D1 b1+a2D2 b2+k) is between the first threshold and the second threshold. If yes, the image processing apparatus 120 may announce a first warning (e.g., in form of sound at a normal volume). If not, the image processing apparatus 120 may continue to determine whether the function is smaller than the second threshold. If the function is smaller than the second threshold, the image processing apparatus 120 may announce a second warning (e.g., in form of sound at a large volume);" Fig. 2; ¶: 0044; See also: ¶: 0045].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system/method for controlling a UAV to perform portable monitoring as disclosed by Kadaka to incorporate the teachings regarding implementing a warning system based on a plurality of distance thresholds wherein the strength of the warning is dependent on the distance threshold violated as taught by Lin with a reasonable expectation of success. By combining these inventions, the outcome is a system/method for controlling a UAV to perform portable monitoring that is more robust in its ability to monitor a relatively large area, while still being able to monitor the distance from the monitored object to each reference target and determine whether to announce said warnings according to the distance [Lin; ¶: 0054].
Prior Art (Not relied upon)
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure can be found in the attached form 892.
FUJIMATSU et al. (United States Patent Publication 2015/0015718 A1) discloses: A tracking assistance device for assisting a monitoring person in tracking a moving object by displaying on a display device a monitoring screen in which display views for displaying in real time captured images taken by respective cameras are arranged on a map image representing a monitored area in accordance with an actual arrangement of the cameras, includes: a target-to-be-tracked setting unit that, in response to an input operation performed by the monitoring person on one of the display views to designate a moving object to be tracked, sets the designated moving object as a target to be tracked; a prediction unit that predicts a next display view in which the moving object set as the target to be tracked will appear next based on tracing information obtained by processing the captured images; and a display view indicating unit that indicates the next display view on the monitoring screen.
Ito (United States Patent Publication 2016/0084932 A1) discloses: An image processing apparatus includes a detection unit configured to detect movement information for specifying a movement direction of a specific moving object detected from an image obtained by at least one of a plurality of image capturing units, a prediction unit configured to predict a second image capturing unit configured to image-capture the specific moving object subsequently to a first image capturing unit based on the movement information detected by the detection unit and information representing an image capturing range of each of the plurality of image capturing units, and a display control unit configured to perform display for specifying a prediction result by the prediction unit before the second image capturing unit image-captures the specific moving object.
Seeber et al. (United States Patent Publication 2018/0129881 A1) discloses: Systems, methods, and apparatus for identifying and tracking UAVs including a plurality of sensors operatively connected over a network to a configuration of software and/or hardware. Generally, the plurality of sensors monitors a particular environment and transmits the sensor data to the configuration of software and/or hardware. The data from each individual sensor can be directed towards a process configured to best determine if a UAV is present or approaching the monitored environment. The system generally allows for a detected UAV to be tracked, which may allow for the system or a user of the system to predict how the UAV will continue to behave over time. The sensor information as well as the results generated from the systems and methods may be stored in one or more databases in order to improve the continued identifying and tracking of UAVs.
Miller et al. (United States Patent Publication 2021/0350162 A1) discloses: In some examples, a device may receive, from a first camera, a plurality of images of an airspace corresponding to an area of operation of an unmanned aerial vehicle (UAV). The device may detect, based on the plurality of images from the first camera, a candidate object approaching or within the airspace. Based on detecting the candidate object, the device may control a second camera to direct a field of view of the second camera toward the candidate object. Further, based on images from the second camera captured at a first location and images from at least one other camera captured at a second location, the candidate object may be determined to be an object of interest. In addition, at least one action may be taken based on determining that the candidate object is the object of interest.
YAMAZAKI et al. (United States Patent Publication 2023/0050235 A1) discloses: A surveillance apparatus includes a feature value storage apparatus that associates and stores a feature value of a person belonging to the same group, a detection unit that detects an approach of a person not belonging to the same group to the person belonging to the same group within a reference distance by processing a captured image by using the feature value, and an output unit that performs a predetermined output by using a detection result of the detection unit.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RAMI N BEDEWI whose telephone number is (571)272-5753. The examiner can normally be reached Monday - Thursday - 6:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Scott A. Browne can be reached on (571-270-0151). The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/R.N.B./Examiner, Art Unit 3666C
/SCOTT A BROWNE/Supervisory Patent Examiner, Art Unit 3666