DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged that the application claims priority to foreign application EP23176361.6, filed 31 May 2023. The certified copies of papers required by 37 CFR 1.55 have been received. Priority is acknowledged under 35 U.S.C. 119(a)-(d) and 37 CFR 1.55.
Information Disclosure Statement
The IDS dated 2 May 2024 has been considered and placed in the application file.
Claim Interpretation
Under MPEP 2143.03, "All words in a claim must be considered in judging the patentability of that claim against the prior art." In re Wilson, 424 F.2d 1382, 1385, 165 USPQ 494, 496 (CCPA 1970). As a general matter, the grammar and the ordinary meaning of the terms used in a claim, as understood by one having ordinary skill in the art, will dictate whether, and to what extent, the language limits the claim scope. Language that suggests or makes a feature or step optional, but does not require it, does not limit the scope of a claim under the broadest reasonable claim interpretation. In addition, when a claim requires selection of an element from a list of alternatives, the prior art teaches the element if one of the alternatives is taught by the prior art. See, e.g., Fresenius USA, Inc. v. Baxter Int'l, Inc., 582 F.3d 1288, 1298, 92 USPQ2d 1163, 1171 (Fed. Cir. 2009).
Claims 1, 2, 4 and 5 recite "or." Because "or" is disjunctive, prior art teaching any one of the listed alternatives is sufficient to reject the claim. While citations have been provided for completeness and to expedite prosecution, only one alternative is required. On balance, the disjunctive interpretation (one of A, B, or C) appears to enjoy the most support in the specification, and it is therefore adopted for purposes of this Office Action. Applicant's comments and/or amendments on this issue are invited to clarify the claim language and the prosecution history.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f), because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
"circuitry configured to" in claim 7;
“determining function configured to determine” in claim 7;
“a first calculating function configured to calculate” in claim 7;
“a second calculating function configured to calculate” in claim 7;
“a fourth calculating function configured to calculate” in claim 9;
“a setting function configured to set” in claim 9;
“a deciding function configured to” in claim 10;
“a third calculating function configured to calculate” in claim 11;
“a fourth calculating function configured to calculate” in claim 12; and
“a setting function configured to set” in claim 12.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f).
Claim Interpretation
Claim 10 recites "refrain from using re-identification in tracking the object". The phrase "refrain from using" is a negative limitation because the word "refrain" is exclusionary in nature. According to MPEP § 2173.05(i), "Any negative limitation or exclusionary proviso must have basis in the original disclosure." The specification defines this phrase in paragraph [0035]. Because a reference cannot reasonably be expected to demonstrate the absence of a step, a prior art reference that does not disclose the use of re-identification is considered to teach refraining from its use.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-2, 4-5, 7-8 and 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2018/0211396 A1 (Roahstkhari Javan et al.).
Claim 1
[Roahstkhari Javan et al. Fig. 2, showing object identification and tracking.]
Regarding Claim 1, Roahstkhari Javan et al. teach a computer implemented method of detecting a change of ratio of occlusion of a tracked object in a video sequence ("The following relates to systems and methods for detecting, localizing and tracking an object of interest in videos, particularly in the field of computer vision," paragraph [0002]), the method comprising:
determining, for each of a plurality of image frames of the video sequence, a bounding box or a mask of the tracked object ("In general, single target tracking algorithms consider a bounding box around the object in the first frame and automatically track the trajectory of the object over the subsequent frames," paragraph [0004]);
calculating, for each pair of successive image frames, an intersection over union (IoU) of a first bounding box in a first image frame of the pair of successive image frames and a second bounding box in a second image frame of the pair of successive image frames or of a first mask in the first image frame of the pair of successive image frames and a second mask in the second image frame of the pair of successive image frames ("This is done by measuring the overlap ratio of a prediction bounding box with the ground truth one as the intersection over union, and applying different threshold values between 0 and 1," paragraph [0057]);
calculating, for a further pair of successive image frames subsequent to the plurality of pairs of successive image frames in the plurality of image frames, a further IoU of a first bounding box in a first image frame of the further pair of successive image frames and a second bounding box in a second image frame of the further pair of successive image frames or of a first mask in the first image frame of the further pair of successive image frames and a second mask in the second image frame of the further pair of successive image frames ("This can be done to ensure that the tracking system adaptively learns the appearance of the object in successive frames," paragraph [0007]); and
on condition that the further IoU differs from the calculated IoUs of the plurality of pairs of successive image frames by more than a threshold amount, detecting that a ratio of occlusion of the tracked object has changed ("In order to have a more detailed comparison, the success rate and precision scores are reported for different tracking attributes in FIG. 7. The visual attributes illustrated in FIG. 7 include illumination variation (IV), occlusion (OCC), scale variation (SV), deformation (DEF), motion blur (MB), fast target motion (FM), in-plane and out of plane rotations (IPR and OPR), out-of-view (OV), background clutter (BC), and low resolution videos (LR)," paragraph [0059]).
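For clarity of the record, the claimed computation can be sketched as follows. The sketch is illustrative only and is not asserted to be the applicant's implementation or that of the cited reference: the IoU of two regions A and B is area(A ∩ B) / area(A ∪ B), and the (x1, y1, x2, y2) box format and the comparison of the further IoU against each prior IoU individually are assumptions, since the claim does not mandate a particular aggregation. Claims 2 and 5 narrow the comparison to a mean or median of the prior IoUs, sketched below in connection with claim 2.

```python
# Illustrative sketch only; box format (x1, y1, x2, y2) is an assumption.

def iou(box_a, box_b):
    """IoU(A, B) = area(A intersect B) / area(A union B)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

def occlusion_ratio_changed(iou_history, further_iou, threshold):
    # One reading of the claim: the further IoU differs from each of the
    # previously calculated IoUs by more than the threshold amount.
    return all(abs(further_iou - prior) > threshold
               for prior in iou_history)
```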
It is recognized that the citations and evidence provided above are drawn from potentially different embodiments of a single reference. Nevertheless, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to employ combinations and sub-combinations of these complementary embodiments, because Roahstkhari Javan et al. explicitly motivate doing so at least in paragraphs [0027], [0069] and [0072], including "Although the above principles have been described with reference to certain specific examples, various modifications thereof will be apparent to those skilled in the art as outlined in the appended claims.", and otherwise motivate experimentation and optimization.
The rejection of method claim 1 above applies mutatis mutandis to the corresponding limitations of method claim 4 and device claim 7 while noting that the rejection above cites to both device and method disclosures. Claims 4 and 7 are mapped below for clarity of the record and to specify any new limitations not included in claim 1.
Claim 2
Regarding claim 2, Roahstkhari Javan et al. teach the method according to claim 1, further comprising: calculating a mean or median of the IoUs of the plurality of pairs of successive image frames ("The algorithm starts with an initial set of means and variances estimated from the bounding boxes in the first frame," paragraph [0053]),
wherein the act of detecting comprises: on condition that the further IoU differs from the determined mean or median of the IoUs of the plurality of pairs of successive image frames by more than a threshold amount, detecting that a ratio of occlusion of the tracked object has changed ("This is done by measuring the overlap ratio of a prediction bounding box with the ground truth one as the intersection over union, and applying different threshold values between 0 and 1," paragraph [0057]).
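For clarity of the record, a minimal sketch of this refinement follows (illustrative only; statistics is the Python standard library module, and comparing against a single central value is one natural reading of the claim):

```python
import statistics

def changed_vs_center(iou_history, further_iou, threshold, use_median=False):
    # Compare the further IoU against the mean (or median) of the prior
    # IoUs rather than against each prior IoU individually.
    center = (statistics.median(iou_history) if use_median
              else statistics.mean(iou_history))
    return abs(further_iou - center) > threshold
```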
Claim 4
Regarding claim 4, Roahstkhari Javan et al. teach a computer implemented method of deciding to refrain from using re-identification in tracking an object in image frames of a video sequence ("The following relates to systems and methods for detecting, localizing and tracking an object of interest in videos, particularly in the field of computer vision," paragraph [0002]), the method comprising:
determining, for each of a plurality of image frames of the video sequence, a bounding box or a mask of the tracked object ("In general, single target tracking algorithms consider a bounding box around the object in the first frame and automatically track the trajectory of the object over the subsequent frames," paragraph [0004]);
calculating, for each pair of successive image frames, an intersection over union, IoU, of a first bounding box in a first image frame of the pair of successive image frames and a second bounding box in a second image frame of the pair of successive image frames or of a first mask in the first image frame of the pair of successive image frames and a second mask in the second image frame of the pair of successive image frames ("This is done by measuring the overlap ratio of a prediction bounding box with the ground truth one as the intersection over union, and applying different threshold values between 0 and 1," paragraph [0057]);
calculating, for a further pair of successive image frames subsequent to the plurality of pairs of successive image frames in the plurality of image frames, a further intersection over union, IoU, of a first bounding box in a first image frame of the further pair of successive image frames and a second bounding box in a second image frame of the further pair of successive image frames or of a first mask in the first image frame of the further pair of successive image frames and a second mask in the second image frame of the further pair of successive image frames ("This can be done to ensure that the tracking system adaptively learns the appearance of the object in successive frames," paragraph [0007]); and
on condition that the further IoU differs from the calculated IoUs of the plurality of pairs of successive image frames by more than a threshold amount, deciding to refrain from using re-identification in tracking the object in the further pair of successive image frames ("In order to have a more detailed comparison, the success rate and precision scores are reported for different tracking attributes in FIG. 7. The visual attributes illustrated in FIG. 7 include illumination variation (IV), occlusion (OCC), scale variation (SV), deformation (DEF), motion blur (MB), fast target motion (FM), in-plane and out of plane rotations (IPR and OPR), out-of-view (OV), background clutter (BC), and low resolution videos (LR)," paragraph [0059]).
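For clarity of the record, the claimed decision can be sketched as follows (illustrative only; the function name is hypothetical, and the all-priors comparison is the same assumption made for claim 1 above):

```python
def should_use_reid(iou_history, further_iou, threshold):
    # When the further IoU departs from the prior IoUs by more than the
    # threshold amount, the occlusion ratio has likely changed and
    # appearance features are unreliable, so re-identification is
    # refrained from for this pair of frames.
    changed = all(abs(further_iou - prior) > threshold
                  for prior in iou_history)
    return not changed
```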
Claim 5
Regarding claim 5, Roahstkhari Javan et al. teach the method according to claim 4, further comprising:
calculating a mean or median of the IoUs of the plurality of pairs of successive image frames ("The algorithm starts with an initial set of means and variances estimated from the bounding boxes in the first frame," paragraph [0053]),
wherein the act of detecting comprises: on condition that the further IoU differs from the determined mean or median of the IoUs of the plurality of pairs of successive image frames by more than a threshold amount, deciding to refrain from using re-identification in tracking the object in the further pair of successive image frames ("This is done by measuring the overlap ratio of a prediction bounding box with the ground truth one as the intersection over union, and applying different threshold values between 0 and 1," paragraph [0057]).
Claim 7
Regarding claim 7, Roahstkhari Javan et al. teach a device comprising circuitry configured to ("The following relates to systems and methods for detecting, localizing and tracking an object of interest in videos, particularly in the field of computer vision," paragraph [0002]) execute:
a determining function configured to determine, for each of a plurality of image frames of a video sequence, a bounding box or a mask of the tracked object ("In general, single target tracking algorithms consider a bounding box around the object in the first frame and automatically track the trajectory of the object over the subsequent frames," paragraph [0004]);
a first calculating function configured to calculate, for each pair of successive image frames, an intersection over union, IoU, of a first bounding box in a first image frame of the pair of successive image frames and a second bounding box in a second image frame of the pair of successive image frames or of a first mask in the first image frame of the pair of successive image frames and a second mask in the second image frame of the pair of successive image frames ("This is done by measuring the overlap ratio of a prediction bounding box with the ground truth one as the intersection over union, and applying different threshold values between 0 and 1," paragraph [0057]);
a second calculating function configured to calculate, for a further pair of successive image frames subsequent to the plurality of pairs of successive image frames in the plurality of image frames, a further intersection over union, IoU, of a first bounding box in a first image frame of the further pair of successive image frames and a second bounding box in a second image frame of the further pair of successive image frames or of a first mask in the first image frame of the further pair of successive image frames and a second mask in the second image frame of the further pair of successive image frames ("This can be done to ensure that the tracking system adaptively learns the appearance of the object in successive frames," paragraph [0007]); and
a detecting function configured to, on condition that the further IoU differs from the calculated IoUs of the plurality of pairs of successive image frames by more than a threshold amount, detect that a ratio of occlusion of the tracked object has changed ("In order to have a more detailed comparison, the success rate and precision scores are reported for different tracking attributes in FIG. 7. The visual attributes illustrated in FIG. 7 include illumination variation (IV), occlusion (OCC), scale variation (SV), deformation (DEF), motion blur (MB), fast target motion (FM), in-plane and out of plane rotations (IPR and OPR), out-of-view (OV), background clutter (BC), and low resolution videos (LR)," paragraph [0059]).
Claim 8
Regarding claim 8, Roahstkhari Javan et al. teach the device according to claim 7, wherein the circuitry is further configured to execute: a third calculating function configured to calculate a mean or median of the IoUs of the plurality of pairs of successive image frames ("The algorithm starts with an initial set of means and variances estimated from the bounding boxes in the first frame," paragraph [0053]),
wherein the detecting function is configured to, on condition that the further IoU differs from the determined mean or median of the IoUs of the plurality of pairs of successive image frames by more than a threshold amount, detect that a ratio of occlusion of the tracked object has changed ("This is done by measuring the overlap ratio of a prediction bounding box with the ground truth one as the intersection over union, and applying different threshold values between 0 and 1," paragraph [0057]).
Claim 10
Regarding claim 10, Roahstkhari Javan et al. teach the device according to claim 7, further including a deciding function configured to, on condition that the further IoU differs from the calculated IoUs of the plurality of pairs of successive image frames by more than a threshold amount, refrain from using re-identification in tracking the object in the further pair of successive image frames ("There is provided a unified deep network architecture for object tracking in which the probability distributions of the observations are learnt and the target is identified using a set of weak classifiers ( e.g. Bayesian classifiers) which are considered as one of the hidden layers," paragraph [0026]).
Claim 11
Regarding claim 11, Roahstkhari Javan et al. teach the device according to claim 10, further including:
a third calculating function configured to calculate a mean or median of the IoUs of the plurality of pairs of successive image frames ("The algorithm starts with an initial set of means and variances estimated from the bounding boxes in the first frame," paragraph [0053]),
wherein the detecting function is configured to, on condition that the further IoU differs from the determined mean or median of the IoUs of the plurality of pairs of successive image frames by more than a threshold amount, decide to refrain from using re-identification in tracking the object in the further pair of successive image frames ("This is done by measuring the overlap ratio of a prediction bounding box with the ground truth one as the intersection over union, and applying different threshold values between 0 and 1," paragraph [0057]).
Claim Rejections - 35 USC § 103
Claims 3, 6, 9 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2018/0211396 A1 (Roahstkhari Javan et al.) in view of U.S. Patent Application Publication No. 2023/0360255 A1 (Kocamaz et al.).
Claim 3
Regarding Claim 3, Roahstkhari Javan et al. teach the method according to claim 2, as noted above.
Roahstkhari Javan et al. do not explicitly teach calculating a variance of the IoUs or setting a threshold using the calculated variance.
[Kocamaz et al. Fig. 6, showing overlapping bounding boxes that obscure.]
However, Kocamaz et al. teach further comprising:
calculating a variance of the IoUs of the plurality of pairs of successive image frames ("Furthermore, the additional (or alternate) fields may include, but are not limited to, a list of identifiers associated with the objects, a list of object classifications (with associated probabilities, in some examples), a list of object states (e.g., stopped, moving, etc.), visibility/occlusion information, a list of confidences (variances in the locations, the velocity, the acceleration, and/or the like), timestamps associated with detections, and/or the like," paragraph [0031]); and
setting the threshold using the calculated variance ("In some examples, the prediction component 110 may use at least a threshold number of feature points 406 to predict the new state of the object 304(1)," paragraph [0055]).
Therefore, taking the teachings of Roahstkhari Javan et al. and Kocamaz et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify "Systems and Methods for Object Tracking and Localization in Videos with Adaptive Image Representation" as taught by Roahstkhari Javan et al. with "Joint 2D and 3D Object Tracking for Autonomous Systems and Applications" as taught by Kocamaz et al. The suggestion/motivation for doing so would have been that "[t]he accuracy of object tracking plays a critical role in robust distance-to-object and object velocity estimations and serves to mitigate missed and false positive object detections. By mitigating missed and false positive object detections, these errors are prevented from propagating into planning and control functions of the autonomous and/or semi-autonomous system that make various decisions about control of an ego-machine," as noted by Kocamaz et al. in paragraph [0002]. The combination would also predictably reduce computational load, because devices can reasonably be expected to produce both false positives and false negatives and to expend substantial resources on each; and/or the combination merely unites prior art elements according to known methods to yield predictable results.
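For clarity of the record, the variance-based threshold setting recited in claims 3, 6, 9 and 12 can be sketched as follows (illustrative only; the scale factor k is an assumption chosen to mirror a common k-standard-deviations rule, and statistics.variance requires at least two prior IoUs):

```python
import statistics

def threshold_from_variance(iou_history, k=3.0):
    # Set the change-detection threshold from the spread of the prior
    # IoUs: a stable track tolerates only small IoU excursions, while a
    # noisy track tolerates larger ones.
    return k * statistics.variance(iou_history) ** 0.5  # k std devs
```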
Claim 6
Regarding claim 6, Roahstkhari Javan et al. teach the method according to claim 5, as noted above.
Roahstkhari Javan et al. do not explicitly teach calculating a variance of the IoUs or setting a threshold using the calculated variance.
However, Kocamaz et al. teach further comprising:
calculating a variance of the IoUs of the plurality of pairs of successive image frames ("Furthermore, the additional (or alternate) fields may include, but are not limited to, a list of identifiers associated with the objects, a list of object classifications (with associated probabilities, in some examples), a list of object states (e.g., stopped, moving, etc.), visibility/occlusion information, a list of confidences (variances in the locations, the velocity, the acceleration, and/or the like), timestamps associated with detections, and/or the like," paragraph [0031]), and
setting the threshold using the calculated variance ("In some examples, the prediction component 110 may use at least a threshold number of feature points 406 to predict the new state of the object 304(1)," paragraph [0055]).
Roahstkhari Javan et al. and Kocamaz et al. are combined as per claim 3.
Claim 9
Regarding claim 9, Roahstkhari Javan et al. teach the device according to claim 8, as noted above.
Roahstkhari Javan et al. do not explicitly teach calculating a variance of the IoUs or setting a threshold using the calculated variance.
However, Kocamaz et al. teach wherein the circuitry is further configured to execute:
a fourth calculating function configured to calculate a variance of the IoUs of the plurality of pairs of successive image frames ("Furthermore, the additional (or alternate) fields may include, but are not limited to, a list of identifiers associated with the objects, a list of object classifications (with associated probabilities, in some examples), a list of object states (e.g., stopped, moving, etc.), visibility/occlusion information, a list of confidences (variances in the locations, the velocity, the acceleration, and/or the like), timestamps associated with detections, and/or the like," paragraph [0031]); and
a setting function configured to set the threshold using the calculated variance ("In some examples, the prediction component 110 may use at least a threshold number of feature points 406 to predict the new state of the object 304(1)," paragraph [0055]).
Roahstkhari Javan et al. and Kocamaz et al. are combined as per claim 3.
Claim 12
Regarding claim 12, Roahstkhari Javan et al. teach the device according to claim 11, as noted above.
Roahstkhari Javan et al. do not explicitly teach calculating a variance of the IoUs or setting a threshold using the calculated variance.
However, Kocamaz et al. teach further including:
a fourth calculating function configured to calculate a variance of the IoUs of the plurality of pairs of successive image frames ("Furthermore, the additional (or alternate) fields may include, but are not limited to, a list of identifiers associated with the objects, a list of object classifications (with associated probabilities, in some examples), a list of object states (e.g., stopped, moving, etc.), visibility/occlusion information, a list of confidences (variances in the locations, the velocity, the acceleration, and/or the like), timestamps associated with detections, and/or the like," paragraph [0031]); and
a setting function configured to set the threshold using the calculated variance ("In some examples, the prediction component 110 may use at least a threshold number of feature points 406 to predict the new state of the object 304(1)," paragraph [0055]).
Roahstkhari Javan et al. and Kocamaz et al. are combined as per claim 3.
References Cited
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
U.S. Patent Application Publication No. 2021/0090284 A1 to Ning et al. discloses pose tracking, particularly for top-down, online, multi-person pose tracking. The system includes a computing device having a processor and a storage device storing computer executable code. The computer executable code, when executed at the processor, is configured to: provide a plurality of sequential frames of a video, the sequential frames comprising at least one keyframe and a plurality of non-keyframes; for each of the non-keyframes: receive a previous inference bounding box of an object inferred from a previous frame; estimate keypoints from the non-keyframe in an area defined by the previous inference bounding box to obtain estimated keypoints; determine object state based on the estimated keypoints, wherein the object state comprises a "tracked" state and a "lost" state; and when the object state is "tracked," infer an inference bounding box based on the estimated keypoints to process a frame next to the non-keyframe.
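For clarity of the record, the per-non-keyframe loop that Ning et al. describe can be sketched as follows (illustrative only; estimate_keypoints is a hypothetical callable, and the 0.5 score cutoff and minimum keypoint count are assumptions, not values from Ning et al.):

```python
def track_non_keyframe(frame, prev_box, estimate_keypoints, min_points=4):
    # Estimate keypoints within the previous inference bounding box,
    # decide "tracked" vs "lost" from the confident keypoints, and, when
    # tracked, infer the next bounding box from the keypoints' extent.
    keypoints = estimate_keypoints(frame, prev_box)  # [(x, y, score), ...]
    confident = [(x, y) for x, y, score in keypoints if score > 0.5]
    if len(confident) < min_points:
        return "lost", None
    xs, ys = zip(*confident)
    return "tracked", (min(xs), min(ys), max(xs), max(ys))
```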
U.S. Patent Application Publication No. 2021/0407107 A1 to Lee et al. discloses tracking moving objects depicted in multiple images. One of the methods includes determining, for an image captured by a camera, a first bounding box that represents a first moving object depicted in the image, determining that the first bounding box and a second bounding box overlap in an overlap area, determining that the first moving object represented by the first bounding box was farther from the camera that captured the image than a second moving object represented by the second bounding box, generating a mask for the first bounding box based on the overlap area, and determining, using data from the image that is associated with the mask, that the first moving object matches an appearance of another moving object depicted in another image captured by the camera.
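For clarity of the record, the masking step that Lee et al. describe can be sketched as follows (illustrative only and not the Lee et al. implementation; integer pixel coordinates in (x1, y1, x2, y2) format and the use of NumPy are assumptions):

```python
import numpy as np

def occlusion_mask(far_box, near_box, height, width):
    # Build a boolean mask over the farther object's bounding box,
    # excluding the region overlapped by the nearer object's box, so that
    # appearance matching uses only unoccluded pixels.
    mask = np.zeros((height, width), dtype=bool)
    x1, y1, x2, y2 = far_box
    mask[y1:y2, x1:x2] = True
    ox1, oy1 = max(x1, near_box[0]), max(y1, near_box[1])
    ox2, oy2 = min(x2, near_box[2]), min(y2, near_box[3])
    if ox1 < ox2 and oy1 < oy2:
        mask[oy1:oy2, ox1:ox2] = False  # remove the overlapped region
    return mask
```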
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEATH E WELLS whose telephone number is (703) 756-4696. The examiner can normally be reached Monday-Friday 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ms. Jennifer Mehmood can be reached on 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Heath E. Wells/Examiner, Art Unit 2664
Date: 11 February 2026