DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Claims 1-18 were pending for examination in Application No. 18/470,782, filed September 20th, 2023. In the remarks and amendments received on December 12th, 2025, claims 1-4, 7, 10-11, and 18 were amended. Accordingly, claims 1-18 are currently pending for examination in the application.
Response to Amendment
Applicant’s amendments filed December 12th, 2025, to the Specification and Claims have overcome each and every objection and 35 U.S.C. § 101 rejection previously set forth in the Non-Final Office Action mailed October 1st, 2025. Accordingly, the objection(s) and 35 U.S.C. § 101 rejection(s) are withdrawn in response to the remarks and amendments filed. The Examiner warmly thanks Applicant for considering the suggested amendments to the disclosure.
Response to Arguments
Applicant’s arguments filed December 12th, 2025, regarding the rejection(s) of the independent claim(s) have been fully considered but are not persuasive.
The Examiner respectfully disagrees with Applicant’s argument that “Nogami does not overcome the deficiencies of Krishnaswamy” with respect to teaching and/or suggesting the newly amended independent claim limitation “wherein the target object recognition operation is based on the extracted target region and omits, based on the positioning information, a region of the original image outside of the target region” (see pg. 13 of Applicant’s Remarks).
As detailed in the current rejection below, para(s). [0043], [0047], [0083], [0089], and [0105] of Nogami teach, in the same field of endeavor of target object recognition based on positioning information, the claim limitation “omits, based on the positioning information, a region of the original image outside of the target region during the target object recognition operation” as recognizing an object with an attached wireless tag (e.g., an “RFID tag”) from among a plurality of other objects with attached wireless tags by recognizing the target region of the target object as the region within the sensing range of the wireless tag (i.e., an “RFID read range”) of the target object. The combination of the prior art does not change the principle of operation of Krishnaswamy, as Krishnaswamy also discloses determining target regions (i.e., “ROIs”) in an image based on wireless tags (e.g., “wireless unit[s]”) of target objects in the image and classifying (i.e., performing target recognition on) the target objects based on the wireless tags. It would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Krishnaswamy to incorporate omitting, based on the positioning information, a region of the original image outside of the target region during the target object recognition operation, in order to improve target recognition by distinguishing a singular target object from other detected objects in the image even if the image region of the target object (i.e., the “ROI”) is slightly obstructed by (i.e., overlapping with) an image region of another object, another target object, or a surrounding other object, such that feature information of the target object obtained from the wireless tag of the target object can be compared with features of the target object obtained from the target region in the image for all target objects in the image, as taught by Nogami in para(s). [0054-0055] (see the rejection of claim 1 below).
Priority (Previously Presented)
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed as foreign Patent Application No. CN 202311153118.3, filed on September 7th, 2023.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-18 are rejected under 35 U.S.C. 103 as being unpatentable over Krishnaswamy et al. (Krishnaswamy; US 2018/0276841 A1) in view of Nogami et al. (Nogami; US 2006/0022814 A1).
Regarding claim 1, Krishnaswamy discloses a recognition system, comprising:
processing circuitry (para(s). [0022], recite(s)
[0022] “The material disclosed herein also may be implemented as instructions stored on a machine-readable medium or memory, which may be read and executed by one or more processors. …”
) configured to
acquire an original image of a target object (para(s). [0044], recite(s)
[0044] “Process 300 may include “obtaining image data of at least one image of a video sequence and comprising at least one object region that is a picture of an object and to be segmented from other areas on the at least one image or identified or both” 302. …”
);
acquire positioning information of the target object, the positioning information of the target object including information about the target object’s position in physical space (para(s). [0028-0029] and [0091], recite(s)
[0028] “To resolve these issues, the present method of object detection and segmentation uses the angle of transmission in a short-range wireless data transmission network in order to determine object positions within the field of view (FOV) or image of an image capture device (or camera) also referred to as a local or control device. …”
[0029] “The angular direction of the transmission can be used by measuring the distance of the transmission along the angle of transmission, and once the distance of the transmission is determined, the components of the transmission can be determined to determine the real world position of the peer device (object) relative to the local device so that the object can be correctly position on the FOV of the screen of the local device. …”
[0091] “… a process 900 is provided for a method of determining object positions for image processing using wireless network angle of transmission, and including operations for object detection. …”
, where “object positions” are positioning information of target objects in physical space);
extract a target region in the original image based on the positioning information (para(s). [0049] and [0092], recite(s)
[0049] “Once the position of the object in the FOV is known and the object identity is obtained, segmentation of the entire image alone, or segmentation and object detection can be performed. This may include using the bounding box of each object as a single region or ROI. …”
[0092] “… Relevant here, process 900 may include “determine positions of bounded boxes of at least one object with wireless network unit using the AOT of the unit” 906. Thus, the result is segmentation of one or more objects with wireless network units and that are segmented by using the angle of transmission. …”
, where a “bounding box”, “region”, or “ROI” is extracting (e.g., ‘segmenting’) a target region in the original image based on the positioning information (e.g., “position of the object in the FOV” or “angle of transmission [AOT]”)); and
perform a target object recognition operation to recognize the target object, wherein the target object recognition operation is based on the extracted target region (para(s). [0049] and [0093], recite(s)
[0049] “… All identified ROIs are then provided to a machine learning classifier for object detection that classifies each object for its final identification. …”
[0093] “Then, process 900 may include “classify the objects using ROIs in machine learning classifier” 910. By one form, images of the ROIs are provided as the input to a machine learning classifier performing the object detection as described in detail above. …”
, where “classify[ing] the objects” using the “identified ROIs” is recognizing at least the target object based on the extracted target region by performing a target object recognition operation (e.g., “object detection that classifies each object for its final identification”)).
Where Krishnaswamy does not specifically disclose
and omits, based on the positioning information, a region of the original image outside of the target region during the target object recognition operation;
Nogami teaches in the same field of endeavor of target object recognition based on positioning information
and omits, based on the positioning information, a region of the original image outside of the target region during the target object recognition operation (para(s). [0043], [0047], [0083], [0089], and [0105], recite(s)
[0043] “With this arrangement, it is possible to discriminate between objects to which RFID tags are attached, without adding any special function to these RFID tags. An example of an operation of determining the positions of a plurality of objects according to the first embodiment will be described below with reference to FIGS. 2, 3, 4, and 5.”
[0047] “In the object information acquisition system 200, when the image sensing unit 101 senses image (step S10 in FIG. 2), the RFID reader 102 reads information (ID data) of the two RFID tags 20 in substantially the same range (step S30). Data of the characteristic quantities of objects corresponding to the ID data read from the RFID tags 20 and information (in this case, information of the name and the date of manufacture) unique to the objects are acquired from the database 104 (step S40).”
[0083] “The fourth embodiment has been made in consideration of the above problems, and has as its object to match a sensing range with an RFID read range by controlling sensing parameters and RFID read parameters by synchronizing them with each other, when an RFID tag attached to a certain object is to be read while the object is being shot.”
[0089] “The RFID reader 520 has a directional antenna 521 for communicating by radio only with RFID tags in a specific direction. The RFID reader 520 can control the range (to be referred to as an antenna direction range hereinafter) within which a radio field emitted from the directional antenna 521 radiates, and the antenna output level (the radiant intensity or reception intensity of an electromagnetic wave) having influence on the longitudinal coverage of the radio field.”
[0105] “In the above arrangement, the parameter controller 530 controls the image sensing unit 510 and RFID reader 520 by synchronizing them with each other, so that a sensing range 11 in which the image sensing unit 510 senses an image and an RFID read range 12 in which the RFID reader 520 reads an RFID tag are substantially the same. In this way, only one object B can be contained in the sensing range 11 and RFID read range 12.”
, where “discriminat[ing] between objects to which RFID tags are attached” by at least “match[ing] a sensing range with an RFID read range” for an object in an image is omitting, based on the positioning information (e.g., the “positions of a plurality of objects” determined from each “RFID read range”), a region of the original image outside of the target region (i.e., considering only the region within the “RFID read range” of the target object and omitting other regions, such as the “RFID read range[s]” of other objects with “RFID tags” attached) during a target object recognition operation (i.e., “discriminat[ing] between objects to which RFID tags are attached” such that “only one object …can be contained in the sensing range 11 and RFID read range 12”)).
Since Krishnaswamy and Nogami each disclose identifying information of the target object based on a wireless network unit (e.g., the “wireless unit” in Krishnaswamy and the “RFID tag” in Nogami), it would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Krishnaswamy to incorporate omitting, based on the positioning information, a region of the original image outside of the target region during the target object recognition operation to distinguish a target region of a target object with a wireless unit from other regions of other objects with wireless units even if the target object is slightly obstructed by another object or by a surrounding other object as taught by Nogami (para(s). [0054-0055], recite(s)
[0054] “In an actual operation, an object rarely independently exists in an image without being obstructed by anything. However, it is possible to detect the position of even an object slightly obstructed by another object or by a surrounding irrelevant article. That is, if even a portion of an object exists in a sensed image, the characteristic quantities of the portion are compared with those of an object obtained from the database 104. If the characteristic quantities are partially similar, it is determined that the object exits in this portion. In this manner, even when an RFID tag itself is completely invisible from the reader, the position of the object can be determined. That is, in the conventional technique which determines a position by using light, an RFID tag must be located in a position visible from the reader. In the first embodiment, however, it is possible to read an RFID tag intentionally located in a hidden position, or to read an RFID tag in a direction other than a surface to which the tag is attached.”
[0055] “FIGS. 7, 8, and 9 are views showing the way an object partially obstructed by another object is detected. FIG. 7 is a view showing the state in which objects shot by the image sensing unit 101 are displayed on the display unit 106. The RFID reader 102 reads RFID tags (not shown) attached to two objects 210 and 220. The RFID tag of the object 210 has ID data "00100", and the RFID tag of the object 220 has ID data "00101". The characteristic quantities of objects corresponding to these ID data and data (name data) unique to these objects are read out from the database 104 (FIG. 9). The characteristic quantities of the object 210 are "black, circular column". Therefore, the position of the object in the image can be simply determined by comparison with characteristic quantities obtained by processing the shot image.”
).
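By way of illustration only, the following Python sketch (hypothetical function and variable names; not drawn from Krishnaswamy or Nogami) demonstrates the combined concept mapped above, namely a recognition operation performed only on a target region extracted from positioning information, so that the region of the original image outside of the target region is omitted:

import numpy as np

def extract_target_region(image, box):
    # Crop the target region (e.g., a bounding box / ROI derived from positioning
    # information such as an angle of transmission or an RFID read range).
    x, y, w, h = box
    return image[y:y + h, x:x + w]

def recognize_target(image, box, classifier):
    # Only the extracted target region is passed to the classifier; pixels outside
    # the box never reach the recognition operation, i.e., the region of the
    # original image outside of the target region is omitted.
    roi = extract_target_region(image, box)
    return classifier(roi)

# Example with a placeholder classifier that merely reports the ROI shape.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
print(recognize_target(frame, (100, 50, 64, 128), lambda roi: f"classified ROI of shape {roi.shape}"))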
Regarding claim 2, Krishnaswamy discloses the recognition system according to claim 1, wherein the processing circuitry is further configured to
acquire information about the target object (para(s). [0095], recite(s)
[0095] “By one option, process 900 may include “store object class data of object in metadata of wireless network unit of object, and for individual or each object” 912. By this form, the identification of the object as the result of the object detection may be stored as metadata at the associated object. In this case, the image of the ROI may be annotated with the label of the object identification and then stored for future object detection operations. …”
, where the “object class data of [an] object in metadata of wireless network unit of [the] object” is information acquired about the target object), and
wherein the recognizing of the target object includes
identifying (para(s). [0095]—see citation in claim 2 limitation “acquire information…” above—, where the “object class data of [an] object in metadata of wireless network unit of [the] object” is identifying information of the target object based on the extracted target region (e.g., the “ROI”)), and
recognizing the target object by (para(s). [0095] further recite(s):
[0095] “…With this arrangement, the next time a local device places the object region of an object into an FOV of the local device, the object identification is already present and object detection will be more accurate due to the annotation of the ROI with the class label. As mentioned, the classifier considers the label when scoring the available classes. The accuracy of the object classification is improved by using this method because zero false positives in regions with objects pre-annotated based on the AoT and from the wireless network unit of the object are reduced, and which otherwise may be mis-classified by the network.”
, where “object identification” or “object classification” is recognizing the target object by the information about the target object with the feature information).
Where Krishnaswamy does not specifically disclose
identifying feature information of the target object based on the… target region; and
recognizing the target object by comparing the information about the target object with the feature information;
Nogami teaches in the same field of endeavor of target object recognition based on positioning information
identifying feature information of the target object based on the… target region (para(s). [0041], recite(s)
[0041] “A characteristic quantity comparator 105 acquires, from the database 104, only information corresponding to ID data of an RFID tag 20 read by the RFID reader 102. The acquired information contains characteristic quantity information of an object to which the RFID tag is attached, and information (e.g., the name) unique to the object. The characteristic quantity comparator 105 compares the characteristic quantities of an object acquired from the database 104 with the characteristic quantities of a sensed image acquired by the image processor 103, and determines the position of the object in the sensed image. The characteristic quantity comparison sequence is as follows. That is, the characteristic quantities, such as the edge, the color, the luminance, the texture, the presence/absence of a motion, the shape, and the distance between characteristic points, unique to the object acquired from the database 104 are compared with the characteristic quantities, such as the edge, the color, the luminance, the texture, the presence/absence of a motion, the shape, and the distance between characteristic points, of the sensed image obtained by the image processor 103, and a most matching portion is checked. If a matching portion is found, the ID data of the object and the position in the image are stored so that they correspond to each other. If a plurality of ID data is read by the RFID reader 102, the above sequence is repeated the same number of times as the number of the read ID data. As a consequence, a plurality of objects to which RFID tags are attached can be discriminated in the image. Note that if a plurality of ID data is read, information may also be simultaneously acquired from the database 104.”
, where the “acquired information contain[ing] characteristic quantity information” is identified feature information of the target object (e.g., “sensed object”)); and
recognizing the target object by comparing the information about the target object with the feature information (para(s). [0041]—see preceding citation above—, where identifying the object in the “sensed image” by comparing or matching “characteristic quantities” acquired from the wireless network unit (e.g., the “RFID”) is recognizing the target object by comparing the information about the target object (e.g., the “acquired information contain[ing] characteristic quantity information”) with the feature information (e.g., the “characteristic quantities… of the sensed image”)).
Since Krishnaswamy and Nogami each disclose identifying information of the target object based on a wireless network unit (e.g., the “wireless unit” in Krishnaswamy and the “RFID tag” in Nogami), it would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Krishnaswamy to incorporate identifying feature information of the target object based on the extracted target region and recognizing the target object by comparing the information about the target object with the feature information of the target object based on the extracted target region to improve target object recognition by distinguishing the target object from other objects even if the target object is slightly obstructed by another object or by a surrounding other object as taught by Nogami (para(s). [0054-0055], recite(s)
[0054] “In an actual operation, an object rarely independently exists in an image without being obstructed by anything. However, it is possible to detect the position of even an object slightly obstructed by another object or by a surrounding irrelevant article. That is, if even a portion of an object exists in a sensed image, the characteristic quantities of the portion are compared with those of an object obtained from the database 104. If the characteristic quantities are partially similar, it is determined that the object exits in this portion. In this manner, even when an RFID tag itself is completely invisible from the reader, the position of the object can be determined. That is, in the conventional technique which determines a position by using light, an RFID tag must be located in a position visible from the reader. In the first embodiment, however, it is possible to read an RFID tag intentionally located in a hidden position, or to read an RFID tag in a direction other than a surface to which the tag is attached.”
[0055] “FIGS. 7, 8, and 9 are views showing the way an object partially obstructed by another object is detected. FIG. 7 is a view showing the state in which objects shot by the image sensing unit 101 are displayed on the display unit 106. The RFID reader 102 reads RFID tags (not shown) attached to two objects 210 and 220. The RFID tag of the object 210 has ID data "00100", and the RFID tag of the object 220 has ID data "00101". The characteristic quantities of objects corresponding to these ID data and data (name data) unique to these objects are read out from the database 104 (FIG. 9). The characteristic quantities of the object 210 are "black, circular column". Therefore, the position of the object in the image can be simply determined by comparison with characteristic quantities obtained by processing the shot image.”
).
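By way of illustration only, the following sketch (hypothetical feature names and an arbitrary threshold) demonstrates the comparison concept mapped above: information acquired about the target object (e.g., characteristic quantities retrieved by tag ID) is compared with feature information identified from the extracted target region:

def compare_features(acquired, observed, threshold=0.8):
    # Recognize the target object by comparing the acquired information (e.g.,
    # characteristic quantities keyed by a tag ID) with feature information
    # identified from the extracted target region; features are assumed to be
    # normalized to [0, 1] for this simple agreement score.
    keys = acquired.keys() & observed.keys()
    if not keys:
        return False
    score = sum(1.0 - abs(acquired[k] - observed[k]) for k in keys) / len(keys)
    return score >= threshold

# Example: database features for the tagged object versus features measured in
# the target region of the image.
acquired_info = {"hue": 0.10, "aspect_ratio": 0.55}
region_features = {"hue": 0.12, "aspect_ratio": 0.50}
print(compare_features(acquired_info, region_features))  # True for these values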
Regarding claim 3, Krishnaswamy discloses the recognition system according to claim 1, wherein the recognizing of the target object includes
identifying (para(s). [0095]—see similar limitation in claim 2 above—, where the “object class data of [an] object in metadata of wireless network unit of [the] object” is identifying information of the target object based on the extracted target region (e.g., the “ROI”)), and
recognizing the target object by (para(s). [0095]—see citation in claim 2 limitation “recognizing the target object by…” above—, where “object identification” or “object classification” is recognizing the target object by the information of the target region extracted based on the positioning information).
Where Krishnaswamy does not specifically disclose
identifying feature information of the target object based on the… target region; and
recognizing the target object by comparing the feature information with a feature of the target region…;
Nogami teaches in the same field of endeavor of target object recognition based on positioning information
identifying feature information of the target object based on the… target region (para(s). [0041]—see similar limitation in claim 2 above—, where the “acquired information contain[ing] characteristic quantity information” is identified feature information of the target object (e.g., “sensed object”)); and
recognizing the target object by comparing the feature information with a feature of the target region… (para(s). [0041]—see similar limitation in claim 2 above—, where identifying the object in the “sensed image” by comparing or matching “characteristic quantities” is recognizing the target object by comparing the feature information (e.g., the “acquired information contain[ing] characteristic quantity information”) with a feature of the target region (e.g., “characteristic quantities… of the sensed image”)).
Since Krishnaswamy and Nogami each disclose identifying information of the target object based on a wireless network unit (e.g., the “wireless unit” in Krishnaswamy and the “RFID tag” in Nogami), it would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Krishnaswamy to incorporate identifying feature information of the target object based on the extracted target region and recognizing the target object by comparing the feature information with a feature of the target region extracted based on the positioning information to improve target object recognition by distinguishing the target object from other objects even if the target object is slightly obstructed by another object or by a surrounding other object as taught by Nogami (para(s). [0054-0055]—see citations in similar motivation to combine paragraph of claim 2 above).
Regarding claim 4, Krishnaswamy in view of Nogami discloses the recognition system according to claim 3, wherein Nogami further teaches
the feature information comprises at least one of a size and a location of the target object in the original image (para(s). [0066], [0083], and [0105], recite(s)
[0066] “FIGS. 11A and 11B illustrate cases in each of which an RFID tag 20 is read while the distance between an object 310 to which the RFID tag 20 is attached and the object information acquisition system 300 is changed. As shown in FIG. 11A, if radio field intensity 340 from the RFID tag 20 is low, the object 310 is far from the RFID reader 102. Therefore, if the image sensing unit 101 is installed adjacent to the RFID reader 102, the object 310 is far from the image sensing unit 101 as well. That is, the object is presumably sensed in a relatively small size in an image, so the characteristic quantity of the object 310 obtained from the image is also small. Accordingly, the reduction ratio of the characteristic quantity of the object is obtained from the radio field intensity and correspondence table, a characteristic quantity 312 of the object 310 read out from the database 104 is reduced to obtain a characteristic quantity 313, and the characteristic quantity 313 is compared with the characteristic quantity of an object 311 sensed in a small size. This facilitates the comparison. On the other hand, if the radio field intensity from the RFID tag 20 is high, as shown in FIG. 11B, the object is sensed in a large size in the sensed image. Therefore, the characteristic quantity 312 of the object 310 read out from the database 104 is enlarged to form a characteristic quantity 314, and the characteristic quantity 314 is compared with the characteristic quantity of the object 311 sensed in a large size.”
[0083] “…its object to match a sensing range with an RFID read range by controlling sensing parameters and RFID read parameters by synchronizing them with each other, when an RFID tag attached to a certain object is to be read while the object is being shot.”
[0105] “In the above arrangement, the parameter controller 530 controls the image sensing unit 510 and RFID reader 520 by synchronizing them with each other, so that a sensing range 11 in which the image sensing unit 510 senses an image and an RFID read range 12 in which the RFID reader 520 reads an RFID tag are substantially the same. In this way, only one object B can be contained in the sensing range 11 and RFID read range 12.”
, where the characteristic quantities, including “shape” (see para. [0041] in claim 3 above), identify the “size” of the target object from the database of “characteristic quantit[ies]”, and the position of the target object in the image is identified as the area where the target object is located based on the wireless network unit (e.g., an “RFID read range”); accordingly, the feature information comprises at least one of a size and a location of the target object in the original image, respectively),
the feature of the extracted target region comprises at least one of a size and a location of the extracted target region (para(s). [0066], [0083], and [0105]—see preceding citation above—, where the “size” of the target object in the image and the image “sensing range” of the target object in the image are the features of the target object comprising at least one of a size and a location of the extracted target region, respectively (i.e., the size and location of the target object is the same as the size and location of the extracted target region as Krishnaswamy discloses the extracted target region is a region corresponding to the dimensions of the target object—see claim 1 limitation “extract a target region…” above—)), and
the recognizing of the target object includes comparing at least one of the size and the position of the target object in the original image with the corresponding at least one of the size and the position of the extracted target region (para(s). [0066], [0083], and [0105]—see preceding citations above—, where comparing the characteristic quantities, including “shape” and “size” (see para. [0041] in claim 3 above), and matching the position of the target object in the image with the position of the target object identified based on the wireless network unit is recognizing the target object by comparing at least one of the size and the position of the target object in the original image (e.g., the “size” and “sensing range” of the target object in the image, respectively) with the corresponding at least one of the size and the position of the extracted target region (e.g., the “size” and the “sensing range” based on the wireless network unit—the RFID tag—, respectively)).
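By way of illustration only, the following sketch (hypothetical names; the linear intensity-to-scale relation is merely a stand-in for the correspondence table described in Nogami para. [0066]) demonstrates a size comparison of the kind mapped above:

def scaled_expected_size(db_size_px, field_intensity, reference_intensity):
    # Scale the stored (database) size of the target object by a ratio derived from
    # the tag's radio field intensity: a weaker signal implies a more distant and
    # therefore smaller appearance in the image.
    return db_size_px * (field_intensity / reference_intensity)

def size_matches(expected_px, observed_px, tolerance=0.2):
    # Compare the scaled expected size of the target object with the size of the
    # extracted target region, within a relative tolerance.
    return abs(observed_px - expected_px) <= tolerance * expected_px

print(size_matches(scaled_expected_size(120.0, 0.5, 1.0), 55.0))  # True: 55 px observed vs. 60 px expected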
Regarding claim 5, Krishnaswamy in view of Nogami discloses the recognition system according to claim 1, wherein Krishnaswamy further discloses the extract the target region includes
determining, based on the positioning information, a position and a size of the target region in the original image (para(s). [0049] and [0092]—see citations in claim 1 limitation “extract a target region…” above—, where para(s). [0070] further recite(s):
[0070] “Either way, the final position of the object region on the FOV is now complete, where the position includes the location of the object region, or particularly the second network unit on the FOV as an anchor point scaled from coordinates of the second network unit received form the object, the size of the object as determined from dimensions received from the object, and the local orientation of the object calculated from the received global orientation of the object.”
, where the “position” of the target object includes the “location of the object region” and the “size of the object as determined from dimensions received from the object” is determining a position and size of the target region in the original image), and
extracting the target region based on the position and the size of the target region (para(s). [0071], recite(s)
[0071] “The final position, given the dimensions received from the object, may be in single pixel accuracy, or even sub-pixel accuracy (which may be rounded as needed). Once set on the FOV, the outer boundary of the object region is set as the bounding box of the object region as shown around the object regions in the FOV of the local device 200 (FIG. 2). …”
, where the “object region is set as the bounding box of the object region” is extracting the target region based on the position and the size of the target object).
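By way of illustration only, the following sketch (a simple pinhole-camera projection with hypothetical parameter names; neither reference discloses this exact computation) demonstrates how a position and a size of the target region in the image could be determined from an object's physical position and dimensions before the region is extracted:

def determine_region(position_xyz, object_size_m, focal_px, cx, cy):
    # Determine the position and size of the target region in the image from the
    # object's physical position relative to the camera (x right, y up, z forward,
    # in meters) and its physical width and height.
    x, y, z = position_xyz
    u = cx + focal_px * x / z               # horizontal pixel position of the object center
    v = cy - focal_px * y / z               # vertical pixel position of the object center
    w_px = focal_px * object_size_m[0] / z  # projected width in pixels
    h_px = focal_px * object_size_m[1] / z  # projected height in pixels
    return int(u - w_px / 2), int(v - h_px / 2), int(w_px), int(h_px)

# Example: a 0.3 m x 0.4 m object located 2 m in front of and 0.5 m to the right
# of the camera; the returned (x, y, width, height) box is then extracted (cropped)
# from the original image.
print(determine_region((0.5, 0.0, 2.0), (0.3, 0.4), focal_px=800.0, cx=320.0, cy=240.0))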
Regarding claim 6, Krishnaswamy in view of Nogami discloses the recognition system according to claim 1, wherein Krishnaswamy further discloses the processing circuitry is further configured to
determine whether there is a plurality of target objects in the original image (para(s). [0033], recite(s)
[0033] “The use of the wireless network also provides the opportunity to increase efficiency in other ways. The use of the angle of transmission between the local device and individual objects permits a system to determine the position and identification of each of these multiple objects in an image (or scene or FOV). …The more objects are positioned and identified, the more efficient is the system with less conventional segmentation. Since the segmentation provides the regions of interest (ROIs) for the object detection, this also increases the efficiency of the object detection.”
, where determining “multiple objects in an image” is determining there is a plurality of target objects in the original image);
extract a plurality of target regions in the original image based on a plurality of positioning information of the plurality of target objects respectively based on a determination that there is the plurality of target objects, the plurality of target regions including at least one corresponding target object of the plurality of target objects (para(s). [0072] and [0092], recite(s)
[0072] “By one option, process 400 may include “perform segmentation using the bounding box” 416. One or more objects in the image, each with its own bounding box for example, are considered a final segmentation. …”
[0092] “… Then, process 900 may include “perform object region proposal generation to determine candidate regions of interest (ROIs)” 904, and here this may include the same or similar operations performed by segmentation process 800 as one example. Relevant here, process 900 may include “determine positions of bounded boxes of at least one object with wireless network unit using the AOT of the unit” 906. Thus, the result is segmentation of one or more objects with wireless network units and that are segmented by using the angle of transmission. Also, process 900 may include “determine position of other objects in image without a wireless network unit” 908, and these other objects of the same image of FOV may be segmented by conventional methods as with process 800. Both may or may not have box-shaped bounding boxes that define the outer boundaries of the objects. These objects or ROIs are then provided for object detection. …”
, where the “segmentation using the bounding box” for at least “more objects in the image” is extracting a plurality of target regions in the original image based on a plurality of positioning information of the plurality of target objects respectively based on a determination that there is the plurality of target objects (e.g., “object positions” of each object as recited in para(s). [0028-0029] and [0091]—see citation in claim 1 limitation “acquire positioning information…” above—); and the “candidate regions of interest (ROIs)” are the plurality of target regions including at least one corresponding target object of the plurality of target objects),
merge adjacent target regions, of the plurality of target regions, into a merged target region (para(s). [0083], recite(s)
[0083] “Process 800 may include “determine similarities between regions and merge regions to form final regions” 814. For this operation, numerous small regions may be merged to establish large similar regions that should be part of the same object. This may include a second pass of the same algorithms used to form the first rough regions, and/or connected component analysis techniques.”
),
use the merged target region and a remainder of the plurality of target regions as target regions for recognition (para(s). [0092]—see citation above—, where providing the “objects or ROIs… for object detection” is using the merged target region (e.g., “final regions” or refined target regions) and a remainder of the plurality of target regions (e.g., target regions not merged) as target regions for recognition (e.g., “object detection” or classification)), and
recognize at least one of the plurality of target objects based on the target regions for recognition (para(s). [0093], recites
[0093] “Then, process 900 may include “classify the objects using ROIs in machine learning classifier” 910. By one form, images of the ROIs are provided as the input to a machine learning classifier performing the object detection as described in detail above. …”
, where “classify[ing] the objects using ROIs” is recognizing at least one of the plurality of target objects (e.g., a ROI) based on the target regions for recognition).
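By way of illustration only, the following sketch (hypothetical names; a single-pass union of overlapping or edge-adjacent boxes standing in for the similarity-based region merging described in Krishnaswamy para. [0083]) demonstrates adjacent target regions being merged while non-adjacent regions are kept as target regions for recognition:

def boxes_touch(a, b):
    # True if two (x, y, w, h) boxes overlap or share an edge.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return not (ax + aw < bx or bx + bw < ax or ay + ah < by or by + bh < ay)

def merge_adjacent(regions):
    # Merge adjacent target regions into a merged target region; regions that touch
    # nothing are kept as-is, and all resulting regions are used for recognition.
    merged = []
    for box in regions:
        for i, existing in enumerate(merged):
            if boxes_touch(box, existing):
                x = min(box[0], existing[0])
                y = min(box[1], existing[1])
                x2 = max(box[0] + box[2], existing[0] + existing[2])
                y2 = max(box[1] + box[3], existing[1] + existing[3])
                merged[i] = (x, y, x2 - x, y2 - y)
                break
        else:
            merged.append(box)
    return merged

print(merge_adjacent([(0, 0, 10, 10), (8, 0, 10, 10), (50, 50, 5, 5)]))
# [(0, 0, 18, 10), (50, 50, 5, 5)] -- the first two regions merge; the third remains separate.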
Regarding claim 7, Krishnaswamy discloses the recognition system according to claim 6, wherein the recognizing of the at least one of the plurality of target objects includes
identifying a plurality of (para(s). [0095]—see similar limitation in claim 2 above—, where identifying information of a target object (e.g., the “object class data of [an] object in metadata of wireless network unit of [the] object”) based on a target region for recognition (e.g., an “ROI”) for multiple objects (see para. [0033] in claim 6 limitation “determine whether there is a plurality of target objects…” above) is identifying a plurality of information of the plurality of target objects based on the target regions for recognition (e.g., the “ROIs”)), and
recognizing the at least one of the plurality of target objects by (para(s). [0095]—see citation in claim 2 limitation “recognizing the target object by…” above—and para. [0093]—see claim 6 limitation “recognize at least one of the plurality of target objects…” above—, where performing “object identification” or “object classification” for each object based on the ROIs is recognizing at least one of the plurality of target objects by the information of the plurality of target regions extracted based on the positioning information).
Where Krishnaswamy does not specifically disclose
identifying a plurality of feature information of the plurality of target objects…; and
recognizing the at least one of the plurality of target objects by comparing the plurality of feature information with features of the plurality of target regions…;
Nogami teaches in the same field of endeavor of target object recognition based on positioning information
identifying a plurality of feature information of the plurality of target objects… (para(s). [0041]—see claim 2 limitation “identifying feature information…” above—, where para(s). [0112] further recite(s):
[0112] “…Note that if a plurality of objects is sensed, it is also possible to perform sensing again.”
, where the identified feature information (e.g., “acquired information contain[ing] characteristic quantity information”) of a target object (e.g., “sensed object”) can be performed for a plurality of target objects (e.g., “plurality of [sensed] objects”) is identifying a plurality of feature information of the plurality of target objects); and
recognizing the at least one of the plurality of target objects by comparing the plurality of feature information with features of the plurality of target regions… (para(s). [0041]—see claim 2 limitation “recognizing the target object by…” above—, where para(s). [0055] further recite(s):
[0055] “FIGS. 7, 8, and 9 are views showing the way an object partially obstructed by another object is detected. FIG. 7 is a view showing the state in which objects shot by the image sensing unit 101 are displayed on the display unit 106. The RFID reader 102 reads RFID tags (not shown) attached to two objects 210 and 220. The RFID tag of the object 210 has ID data "00100", and the RFID tag of the object 220 has ID data "00101". The characteristic quantities of objects corresponding to these ID data and data (name data) unique to these objects are read out from the database 104 (FIG. 9). The characteristic quantities of the object 210 are "black, circular column". Therefore, the position of the object in the image can be simply determined by comparison with characteristic quantities obtained by processing the shot image.”
, where “discriminat[ing]” a “plurality of objects to which RFID tags are attached” is recognizing at least one of the plurality of target objects by comparing the plurality of feature information (e.g., the “acquired information contain[ing] characteristic quantity information”) with features of the plurality of target regions (e.g., the “characteristic quantity information” of the objects sensed in the image)).
Since Krishnaswamy and Nogami each disclose identifying information of the target object based on a wireless network unit (e.g., the “wireless unit” in Krishnaswamy and the “RFID tag” in Nogami), it would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Krishnaswamy to incorporate identifying a plurality of feature information of the plurality of target objects based on the target regions for recognition and recognizing at least one of the plurality of target objects by comparing the plurality of feature information with features of the plurality of target regions extracted based on the plurality of positioning information to improve target object recognition by distinguishing the target object from other objects even if the target object is slightly obstructed by another object or by a surrounding other object as taught by Nogami (para(s). [0054-0055]—see citations in similar motivation to combine paragraph of claim 2 above).
Regarding claim 8, Krishnaswamy in view of Nogami discloses the recognition system of claim 1, wherein Krishnaswamy further discloses the acquire positioning information includes communicating with the target object using radio identification technology (para(s). [0028]—see citation in claim 1 limitation “acquire positioning information…” above—, where para(s). [0028], [0054], and [0130] further recite(s):
[0028] “… A number of electronic devices that have image-based features including segmentation and object detection also have short-range wireless transmission capability such as with a Bluetooth® (BT) network to receive data from a peer or object with a wireless network (or BT) chip (or other network transceiver structure) referred to herein as a wireless network unit. Bluetooth® is known to have an angle of arrival (AoA) and angle of departure (AoD) that respectively indicates both an angular direction to or from a network point (which may be a transmitter or receiver or transceiver device). …”
[0054] “Process 400 may include “pair local device to at least one object shown in an image and on a wireless network” 404. As mentioned, the wireless network may be radio networks such as a short-wavelength radio network, may be a short-range network, and by one example, may be a Bluetooth® network with specifications according to Bluetooth 5, by one example (See, www.bluetooth.com/specifications). …”
[0130] “… a peer device or object as described above. The system 1500 may include a wireless network unit 1502, such as a BT chip. Alternatively, system 1500 is a BT chip or other wireless network chip. …”
, where the “wireless network unit” includes radio identification technology (e.g., “Bluetooth® (BT)”)).
Regarding claim 9, Krishnaswamy in view of Nogami discloses the recognition system of claim 8, wherein Krishnaswamy further discloses the radio identification technology includes at least one of radio-frequency identification (RFID), Bluetooth technology, ultra-wideband (UWB) technology, or wireless fidelity (WiFi) technology (para(s). [0028], [0054], and [0130]—see citations in claim 8 above—, where the radio identification technology includes at least Bluetooth technology).
Regarding claim 10, Krishnaswamy in view of Nogami discloses the recognition system of claim 1, wherein Krishnaswamy further discloses
the acquire the original image includes using an image capture device (para(s). [0028]—see citation in claim 1 limitation “acquire positioning information…” above—, where the “image capture device (or camera)” is an image capture device), and
the acquire positioning information includes acquiring information regarding at least one of a distance, an elevation, or an azimuth angle of the target object in relation to the image capture device (para(s). [0028-0029] and [0091]—see citation in claim 1 limitation “acquire positioning information…” above—, where para(s). [0060] further recite(s):
[0060] “Referring to FIG. 6, a diagram 600 of the cuboid 510 shows a distance D that is the actual linear (or line-of-sight) distance between the wireless network unit 508 at the object and the wireless network unit 506 at the local device (or camera FOV), and represents the angle of Transmission (or arrival or departure depending on which unit is being referred to). …”
[Krishnaswamy FIG. 6 reproduced here as media_image1.png (greyscale)]
, where the positioning information includes at least a distance (e.g., “D” in Fig. 6), an elevation (e.g., “h” in Fig. 6), and an azimuth angle (e.g., “α” in Fig. 6) of the target object (e.g., “wireless network unit 508 at the object”) in relation to the image capture device (e.g., “wireless unit 506 at the local device (or camera FOV)”)).
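By way of illustration only, the following sketch (hypothetical names, with variables loosely corresponding to the distance D, height h, and angle α discussed above) demonstrates how a distance together with elevation and azimuth angles resolves into a position of the target object relative to the image capture device:

import math

def relative_position(distance_d, elevation_rad, azimuth_rad):
    # Resolve a line-of-sight distance and its elevation/azimuth angles into x/y/z
    # components of the object's position relative to the image capture device
    # (x lateral, y vertical, z along the optical axis).
    horizontal = distance_d * math.cos(elevation_rad)
    x = horizontal * math.sin(azimuth_rad)
    z = horizontal * math.cos(azimuth_rad)
    y = distance_d * math.sin(elevation_rad)
    return x, y, z

# Example: an object 3 m away, 10 degrees above the device and 20 degrees to its right.
print(relative_position(3.0, math.radians(10), math.radians(20)))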
Regarding claim 11, the claim is the method performed by the system of claim 1. Therefore, claim 11 recites similar limitations to claim 1 and is rejected for similar rationale and reasoning (see the analysis for claim 1 above).
Regarding claim 12, the claim recites similar limitations to claim 2 and is rejected for similar rationale and reasoning (see the analysis for claim 2 above).
Regarding claim 13, the claim recites similar limitations to claim 3 and is rejected for similar rationale and reasoning (see the analysis for claim 3 above).
Regarding claim 14, the claim recites similar limitations to claim 4 and is rejected for similar rationale and reasoning (see the analysis for claim 4 above).
Regarding claim 15, the claim recites similar limitations to claim 5 and is rejected for similar rationale and reasoning (see the analysis for claim 5 above).
Regarding claim 16, the claim recites similar limitations to claim 6 and is rejected for similar rationale and reasoning (see the analysis for claim 6 above).
Regarding claim 17, the claim recites similar limitations to claim 7 and is rejected for similar rationale and reasoning (see the analysis for claim 7 above).
Regarding claim 18, Krishnaswamy in view of Nogami discloses a computer readable storage medium, wherein Krishnaswamy further discloses the computer readable storage medium stores computer program instructions thereon, the computer program instructions, when executed by a processor, cause the processor to implement the method of claim 11 (para(s). [0022]—see citation in claim 1 limitation “processing circuitry” above—, where a “machine-readable medium or memory” is a computer readable storage medium).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Nogami et al. (JP 2006040035 A) discloses in the abstract and description, para(s). [0035]:
[abstract] “When ID data is read from multiple RFID tags using an RFID with a simple configuration, it is difficult to determine from which RFID tag information is being acquired. When reading a plurality of objects (10) to which RFID tags (20) are attached, an image capturing unit (101) captures an image and an RFID reading unit (102) simultaneously reads the RFID tags (20) within the image capturing range. The object to which each RFID tag is attached in the image is identified by comparing the feature values (appearance model) of the object obtained from the database 104 based on the ID data of the RFID tag with the feature values obtained by image processing of the captured image. Furthermore, by providing a display unit 106 that displays information specific to the object at the position where the RFID tag is located in the image, an RFID reading system that is easy for humans to understand and has many applications is provided.”
[0035] “…In the second embodiment, the distance between the object information acquisition system device 100 and the RFID tag is estimated from the strength of the radio wave received from the RFID tag, and the size of the object in the image is estimated, and feature matching is performed.”
Hollar et al. (US 2021/0304577 A1) discloses in the abstract and para(s). [0104]:
[abstract] “Real time location systems are provided including one or more ultra-wideband (UWB) sensors positioned in an environment; one or more image capture sensors positioned in the environment; and at least one UWB tag associated with an object in the environment to provide a tagged item in the environment. The one or more UWB sensors and the one or more image capture sensors are integrated into at least one location device. The at one location device includes a UWB location device, a combination UWB/camera location device and/or a camera location device. A location of the tagged item is tracked using the at least one location device. and wherein a location of the tagged item is tracked using the at least one location device.”
[0104] “In FIG. 10B, the camera search area 1033 may not have been reduced, but with the additional information, the camera can accurately associate the tag with an object. Two additional methods will be discussed herein. In a first method, the system applies vision processing to recognize objects within the searchable area of the camera image as illustrated in FIG. 10B. As noted, three boxes 1028 , 1025 , and 1027 are identified. If the boxes are somehow overlapping or if the size of the box in the image correlates with the distance it is away from the camera, a determination can be made as to which boxes are nearer to the camera and which ones are farther. In these embodiments, the UWB tag 1005 was determined to be between UWB tag 1007 and UWB tag 1008 based on the ToF measurements. Within the captured image 1021 of FIG. 10B, the system can correctly identify the middle box 1025 and therefore have an association with UWB tag 1005 . In some embodiments, the camera sensor is a stereo vision camera system or a scanning LiDAR unit capable of measuring depth. In these embodiments, the depth equates to a distance measured from the camera sensor. Matching the depth camera distance with the distance of the ToF measurement allows the removal of all but one of the boxes as viable candidates for accurate paring of a UWB tag with the associated object from the captured image.”
Baeg et al. (KR 100920457 B1) discloses in the abstract and pg. 11 of the description:
[abstract] “The present invention provides a method and device for recognizing objects having RFID tags attached thereto to support a service robot to provide reliable services. Unlike conventional methods that extract feature points from a video image obtained through a camera attached to a robot and recognize an object through them, the present invention recognizes an object using an RFID code detected through an RFID reader attached to a service robot, accesses a database to obtain video object recognition information corresponding to the detected code, and then performs object recognition based on that information, thereby enabling more robust and reliable object recognition. Accordingly, the object recognition method and system according to the present invention enables a service robot to reliably recognize and manipulate objects with a simple hardware and software configuration, thereby improving the quality of service while lowering the production cost of the service robot.”
[pg. 11 of description] “…The object recognition process, as illustrated in Fig. 6, is broadly divided into a first stage in which the image information captured by the mobile robot is converted by applying the Gaussian Color Model (GCM) using a color information descriptor transmitted from the database, and the image information is filtered by DC0-1 to perform blob detection to select candidate elixirs containing objects, a second stage in which the blobs in which information from DC0 to 1 appears in common are selected and processed to select image areas with a high probability of being objects, and a third stage in which the target object is finally selected using the aforementioned EHD, and is specifically performed as follows. ….”
Flick et al. (US 2022/0004775 A1) discloses in the abstract and para(s). [0178] and [0181]:
[abstract] “A system for situational awareness monitoring within an environment, wherein the system includes one or more processing devices configured to receive an image stream including a plurality of captured images from each of a plurality of imaging devices, the plurality of imaging devices being configured to capture images of objects within the environment and at least some of the imaging devices being positioned within the environment to have at least partially overlapping fields of view, identify overlapping images in the different image streams, the overlapping images being images captured by imaging devices having overlapping fields of view, analyse the overlapping images to determine object locations within the environment, analyse changes in the object locations over time to determine object movements within the environment, compare the object movements to situational awareness rules and use results of the comparison to identify situational awareness events.”
[0178] “At step 814 , synchronous overlapping images are identified by identifying images from different image streams that were captured substantially simultaneously, and which include objects captured from different viewpoints. Identification of overlapping images can be performed using the extrinsic calibration data, allowing cameras with overlapping field of view to be identified, and can also involve analysis of images including objects to identify the same object in the different images. This can examine the presence of machine readable coded data, such as April Tags within the image, or can use recognition techniques to identify characteristics of the objects, such as object colours, size, shape, or the like.”
[0181] “In this example, at step 900 , the server 310 analyses one or more images of the object, using image processing techniques, and ascertains whether the image includes visual coded data, such as an April Tag at step 905 . If the server identifies an April Tag or other visual coded data, this is analysed to determine an identifier associated with the object at step 910 . An association between the identifier and the object is typically stored as object data in a database when the coded data is initially allocated to the object, for example during a set-up process when an April tag is attached to the object. Accordingly, decoding the identifier from the machine readable coded data allows the identity of the object to be retrieved from the stored object data, thereby allowing the object to be identified at step 915 .”
Kondo et al. (US 2009/0066513 A1) discloses in the abstract and para(s). [0228] and [0257]:
[abstract] “When object detection means, for detecting ID information and a position of an object from outputs of a wireless tag reader, a human detection sensor, and a camera, determines that data relating to first object ID information and data relating to second object ID information, corresponding respectively to first time and second time on which human detection data indicating a presence of a human is obtained, are different from each other, the object detection means calculates a difference between first image data and second image data corresponding to the respective times to thereby detect the object position.”
[0228] “Next, in the step SA 7 to be carried out by the image data selecting unit 504 , image data to be used for detecting the object position is selected. From the data of FIG. 3, it is determined by the object ID comparing unit 503 that in the time zone TZ 1 , the wireless tag reader 101 detects data relating to the ID information of the object A, and in the time zone TZ 2 , the wireless tag reader 102 detects data relating to the object ID information indicating that ID information of the object has not been detected. That is, in the time zone TZ 1 , it can be estimated by the object ID comparing unit 503 that the object A placed within the detection range 90 a of the wireless tag reader 101 or held by a human within the detection range 90 a of the wireless tag reader 101 is moved to the outside of the detection range of the wireless tag reader during the time from 3 to 13 seconds.”
[0257] “As in the first embodiment, in the room RM, the three wireless tag readers 101 to 103 and the three antennas 111 to 113 of the three wireless tag readers 101 to 103 are provided. The detection ranges of the wireless tag readers 101 to 103 are expressed with circles 90 a , 90 b , and 90 c drawn by dotted lines. Particularly, the wireless tag reader 101 is provided such that the detection range 90 a becomes around the gateway GW of the room RM. The wireless tag readers 101 to 103 and the antennas 111 to 113 are the same as those of the first embodiment, so detailed description is omitted.”
Datar et al. (US 2022/0327836 A1) discloses in the abstract and para(s). [0188] and [0224]:
[abstract] “An image sensor is positioned such that a field-of-view of the image sensor encompasses at least a portion of a rack storing items. The image sensor generates images of the items stored on the rack. Over a period of time, a tracking subsystem tracks a pixel position of the wrist of a person interacting with items stored on the rack. The tracking subsystem receives image frames of the angled-view images. The tracking subsystem determines whether an item was interacted with by a person and, if so, the identified item is assigned to the person.”
[0188] “The tracking system 100 proceeds to step 614 in response to determining that the number of identified markers 304 is greater than or equal to the predetermined threshold value. Once the tracking system 100 identifies a suitable number of markers 304 on the marker grid 702 , the tracking system 100 then determines a pixel location 402 for each of the identified markers 304 . Each marker 304 may occupy multiple pixels in the frame 302 . This means that for each marker 304 , the tracking system 100 determines which pixel location 402 in the frame 302 corresponds with its (x,y) coordinate 306 in the global plane 104 . In one embodiment, the tracking system 100 using bounding boxes 708 to narrow or restrict the search space when trying to identify pixel location 402 for markers 304 . A bounding box 708 is a defined area or region within the frame 302 that contains a marker 304 . For example, a bounding box 708 may be defined as a set of pixels or a range of pixels of the frame 302 that comprise a marker 304 .”
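For illustration only, the bounding-box-restricted marker search described in [0188] could be sketched as follows; the grayscale-frame assumption, the intensity test standing in for marker detection, and all names are assumptions made for this sketch:

import numpy as np

def marker_pixel_location(frame, bounding_box, intensity_threshold=200):
    """frame: 2-D grayscale array; bounding_box: (row_min, row_max, col_min, col_max).
    Only pixels inside the bounding box are examined, narrowing the search space."""
    r0, r1, c0, c1 = bounding_box
    region = frame[r0:r1, c0:c1]
    rows, cols = np.nonzero(region >= intensity_threshold)
    if rows.size == 0:
        return None  # no marker pixels found inside the bounding box
    # Centroid of the marker pixels, offset back into full-frame coordinates.
    return (r0 + int(rows.mean()), c0 + int(cols.mean()))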
[0224] “Returning to FIG. 10 at step 1006 , the tracking system 100 determines the object is within the overlap region 1110 between the first sensor 108 and the second sensor 108 . Returning to the example in FIG. 11, the tracking system 100 may compare the first pixel location 402 A for the first person 1106 to the pixels identified in the first adjacency list 1114 A that correspond with the overlap region 1110 to determine whether the first person 1106 is within the overlap region 1110 . The tracking system 100 may determine that the first object 1106 is within the overlap region 1110 when the first pixel location 402 A for the first object 1106 matches or is within a range of pixels identified in the first adjacency list 1114 A that corresponds with the overlap region 1110 . For example, the tracking system 100 may compare the pixel column of the pixel location 402 A with a range of pixel columns associated with the overlap region 1110 and the pixel row of the pixel location 402 A with a range of pixel rows associated with the overlap region 1110 to determine whether the pixel location 402 A is within the overlap region 1110 . In this example, the pixel location 402 A for the first person 1106 is within the overlap region 1110 .”
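For illustration only, the overlap-region membership test described in [0224] could be sketched as follows; representing the adjacency-list entry as inclusive pixel-row and pixel-column ranges is an assumption made for this sketch:

def in_overlap_region(pixel_location, row_range, col_range):
    """pixel_location: (row, col); row_range and col_range are inclusive (min, max)
    tuples describing the pixels that correspond with the overlap region."""
    row, col = pixel_location
    return row_range[0] <= row <= row_range[1] and col_range[0] <= col <= col_range[1]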
Fung et al. (US 2017/0116496 A1) discloses in the abstract and para(s). [0053]:
[abstract] “The invention provides a method for instantly recognizing and positioning an object, comprising steps of a) wirelessly searching a wireless identification of the object; b) capturing a plurality of images of the object for each image capture; c) determining a 2D center coordinate (x, y) of the object based on a center coordinate (xw, yw) of the wireless identification of the object; d) transforming the captured images of the object to acquire a 3D pattern of the object, and comparing the 3D pattern of the object with 3D patterns pre-stored; and e) if the 3D pattern of the object matches with a pre-stored 3D pattern, calculating and obtaining a 3D center coordinate (x, y, z) of the object to recognize and position the object. The invention also provides a system and a processor enabling the method, and use of the system.”
[0053] “In order to accelerate the positioning, the step s 103 may further comprise a step of segmenting the captured images into a predetermined number of regions, and determining which region the wireless identification of the object 20 is located. For precise positioning of the object 20 , the step s 103 may further comprise a step of enlarging the determined region around the center coordinate (xw, yw) of the wireless identification of the object 20 , and determining the 2D center coordinate (x, y) of the object 20 in the enlarged region according to the center coordinate (xw, yw), as shown in FIGS. 4 and 5. In these figures, the circle represents where the identification of the object 20 to be searched is located in the segmented regions. The 2D center coordinate (x, y) of the object 20 would be determined in the enlarged regions around the center coordinate (xw, yw) of the identification of the object 20 . It is possible to search the 2D center coordinate (x, y) of the object 20 using various methods, such as Fast nearest-neighbor algorithm, and using a pre-stored default distance between the center coordinate (xw, yw) of the identification and the 2D center coordinate (x, y) of the object 20 , or determine the 2D center coordinate (x, y) of the object 20 according to the pre-stored pattern of the object 20 .”
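For illustration only, the segment-then-enlarge step described in [0053] could be sketched as follows; the grid size, the enlargement margin, and all names are assumptions made for this sketch rather than values taken from Fung:

def enlarged_region_around_tag(xw, yw, image_w, image_h, grid=3, margin=0.25):
    """Locate the grid cell containing the wireless identification's coordinate (xw, yw),
    then enlarge that cell by a margin before searching it for the object's 2D center."""
    cell_w, cell_h = image_w / grid, image_h / grid
    col = min(grid - 1, int(xw // cell_w))
    row = min(grid - 1, int(yw // cell_h))
    x0 = max(0.0, col * cell_w - margin * cell_w)
    y0 = max(0.0, row * cell_h - margin * cell_h)
    x1 = min(float(image_w), (col + 1) * cell_w + margin * cell_w)
    y1 = min(float(image_h), (row + 1) * cell_h + margin * cell_h)
    return x0, y0, x1, y1

The 2D center coordinate (x, y) of the object would then be searched only within the returned region, for example with a nearest-neighbor search seeded at (xw, yw).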
Eto et al. (US 2018/0178386 A1) discloses in the abstract and para(s). [0164]:
[abstract] “According to one embodiment, a conveying device includes a controller that is configured to determine a moving direction of a holder holding a first object based on a state of overlapping between the first object and a second object viewed in a conveying direction of the first object in a case where the second object is positioned in the conveying direction of the first object with respect to the first object.”
[0164] “The database DB stores information on a plurality of objects M including the first object, the second object, and the third object and information on an obstacle (for example, the pole P). That is, the “information on the object” mentioned in the specification is not limited to information detected during conveying of the object M and may be information given in advance. For example, the “information on the object” stored in the database DB may include at least one of a camera video, cargo tag information, and trajectory information of a loading robot when shipping of the object M is created (for example, when the object M is collected or loaded). The camera video is a video from which the stacking state of a plurality of objects M can be understood, such as a video in which the process of stacking the plurality of objects M is captured, for example. The cargo tag information is information stored in an IC tag (for example, a radio frequency identifier (RFID)) attached to each object M, for example. The cargo tag information may include size information of an object M and information indicating a stacking position of the object M or a stacking order of the object M, for example. The trajectory information of the robot may include position information and height information of the robot arm when each object M is stacked and information on the order of stacked objects M. The controller 15 can predict the stacking state of a plurality of objects M which have been stacked and conveyed by obtaining such information from the database DB with the aid of the information acquirer 110 .”
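For illustration only, a per-object record of the kind paragraph [0164] describes the database DB as holding could be sketched as follows; the field names and types are assumptions made for this sketch:

from dataclasses import dataclass, field

@dataclass
class StoredObjectInfo:
    rfid_tag_id: str          # cargo tag (IC tag / RFID) identifier attached to the object
    size_mm: tuple            # (width, depth, height) from the cargo tag information
    stacking_position: tuple  # position of the object within the stack
    stacking_order: int       # order in which the object was stacked
    arm_trajectory: list = field(default_factory=list)  # loading-robot arm positions when stacked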
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JULIA Z YAO whose telephone number is (571)272-2870. The examiner can normally be reached Monday - Friday (8:30AM - 5PM).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached at (571)270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.Z.Y./Examiner, Art Unit 2666
/EMILY C TERRELL/Supervisory Patent Examiner, Art Unit 2666