Prosecution Insights
Last updated: April 19, 2026
Application No. 18/657,674

RADIO FREQUENCY IDENTIFICATION AND MACHINE LEARNING FOR CLOTHING IDENTIFICATION

Non-Final OA (§101, §103)
Filed
May 07, 2024
Examiner
LIU, XIAO
Art Unit
2664
Tech Center
2600 — Communications
Assignee
Toshiba Global Commerce Solutions, Inc.
OA Round
1 (Non-Final)
Grant Probability: 89% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 89%, above average (257 granted / 290 resolved; +26.6% vs TC avg)
Interview Lift: +11.5% (moderate lift across resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Total Applications: 334 across all art units (44 currently pending)

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 50.9% (+10.9% vs TC avg)
§102: 17.0% (-23.0% vs TC avg)
§112: 17.4% (-22.6% vs TC avg)
Based on career data from 290 resolved cases; Tech Center average shown for comparison.

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 05/07/2024 has been considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 15-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent-eligible subject matter because computer-readable storage media may include transitory media, which do not fall within any of the four categories. Claims 15-20 are also drawn to a computer program for carrying out the instructions/functionality of the claimed invention, which is no more than a software computer program (i.e., software per se). Software per se is non-statutory because it cannot be interpreted to fall into any of the four patentable categories of process, machine, manufacture, or composition of matter. If Applicant amends the claims to cover a computer-readable medium, Applicant is advised to exclude transitory embodiments.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chakravarty et al. (US 11361536 B2), hereinafter Chakravarty, in view of Appleboim et al. (US 20210090209 A1), hereinafter Appleboim, and further in view of Mohammad et al. (JP 2021513126 A), hereinafter JP-art, in view of Hussain et al. (WO 2024156669 A1), hereinafter Hussain.

Regarding claim 1, Chakravarty discloses a method, comprising (Abstract; FIGS. 1-10): accessing a first set of images depicting a user selecting an item (FIG. 1, module 102; Col. 2, lines 46-57, “registering an identity of a person who visits an area designated for holding objects, capturing an image … an object in the version of the image, associating the registered identity of the person …”; Col. 6, lines 44-55; FIG. 8, step 806); generating a first predicted type of the item based on processing at least one of the first set of images using a machine learning model (FIG. 1, DNN 112; FIG. 2, object detection; FIG. 3, results 320; Col. 7, lines 7-8, “identify the type of object detected”; Col. 9, lines 20-23; FIGS. 4-9); identifying, using a barcode on the item (Col. 6, lines 27-30); and training the machine learning model based on images provided in an image database (FIG. 1, DNN 112, image DB 122; FIG. 2, neural network training; FIG. 10; Col. 9, lines 60-63).
Chakravarty does not disclose that the identified item is an item of clothing, and Chakravarty does not disclose generating a first predicted size of the item based on processing at least one of the first set of images using a machine learning model. However, Chakravarty places no limitation on the item to be detected; the item can be any object, including an item of clothing. In the same field of endeavor, Appleboim teaches a method for transforming an image of an article, including clothing, for virtual presentation (Appleboim: FIGS. 1-71). Appleboim further teaches generating a first predicted type (Appleboim: FIG. 3, output 28; [0119], “product type”; [0121]) and a first predicted size of the item of clothing based on processing at least one of the first set of images using a machine learning model (FIG. 4, product size estimation 8A; [0125]-[0126]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chakravarty with the teaching of Appleboim by generating a first predicted type and a first predicted size of the item of clothing in order to extract more features of the item of clothing.

Chakravarty in view of Appleboim does not teach identifying, using a radio frequency identification (RFID) tag on the item of clothing, a true type and a true size of the item of clothing; it does teach identifying by using a barcode on the item (Chakravarty: Col. 6, lines 27-30). A person of ordinary skill in the art would understand that using an RFID tag to detect a type and a size of an item is common practice in the field. JP-art is an analogous art pertinent to the problem to be solved in this application and teaches a method for realizing a walk-through checkout station including an RFID interrogator equipped with a plurality of antennas (JP-art: FIGS. 1-8).
JP-art further teaches identifying, using a radio frequency identification (RFID) tag on the item of clothing, a true type and a true size of the item of clothing (JP-art: FIGS. 1, 2, 4; Page 2, Sec. Description, 3rd paragraph, “RFID tags are provided in various configurations, sizes, read ranges, memory, volumes, and the like”; Page 4, 2nd paragraph, “RFID detection system 125 includes RFID tags … provides a product … size, cloth, color, pattern, etc. to a particular shirt”; Page 5, 3rd paragraph, “RFID tag 240 can be marked with an ID unique to each product”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Chakravarty in view of Appleboim with the teaching of JP-art by using RFID in order to obtain more accurate clothing information and for purposes such as anti-theft, inventory checks, etc. (JP-art: Page 4, 2nd paragraph).

Chakravarty in view of Appleboim, and further in view of JP-art, does not teach using RFID tag information as labeling information or ground truth data to train the machine learning model, i.e., comparing the first predicted type and the first predicted size to the true type and the true size, and training the machine learning model based on the comparison. Hussain is an analogous art pertinent to the problem to be solved in this application and teaches a method and a system for determining an activity in an environment, where the environment comprises a plurality of passive RFID tags (Hussain: Abstract; FIGS. 1-23). Hussain further teaches using RFID tag information as labeling information or ground truth data to train the machine learning model (Hussain: FIG. 23; Page 10, last paragraph – Page 11, 1st paragraph; Page 22, 2nd paragraph).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Chakravarty in view of Appleboim, and further in view of JP-art, with the teaching of Hussain by using RFID tag information as labeling information or ground truth data in order to improve the accuracy of the machine learning model (Hussain: Page 11, 1st paragraph).

Regarding claim 9, Chakravarty discloses a system, comprising (Abstract; FIGS. 1-10): one or more memories collectively storing computer-executable instructions (FIG. 2, memory 202); and one or more processors (FIG. 2, processor 200) configured to collectively execute the computer-executable instructions and cause the system to perform an operation, comprising (FIG. 2): accessing a first set of images depicting a user selecting an item (FIG. 1, module 102; Col. 2, lines 46-57, “registering an identity of a person who visits an area designated for holding objects, capturing an image … an object in the version of the image, associating the registered identity of the person …”; Col. 6, lines 44-55; FIG. 8, step 806); generating a first predicted type of the item based on processing at least one of the first set of images using a machine learning model (FIG. 1, DNN 112; FIG. 2, object detection; FIG. 3, results 320; Col. 7, lines 7-8, “identify the type of object detected”; Col. 9, lines 20-23; FIGS. 4-9); identifying, using a barcode on the item (Col. 6, lines 27-30); and training the machine learning model based on images provided in an image database (FIG. 1, DNN 112, image DB 122; FIG. 2, neural network training; FIG. 10; Col. 9, lines 60-63).

Chakravarty does not disclose that the identified item is an item of clothing, and Chakravarty does not disclose generating a first predicted size of the item based on processing at least one of the first set of images using a machine learning model. However, Chakravarty places no limitation on the item to be detected.
The item can be any object, including an item of clothing. In the same field of endeavor, Appleboim teaches a method for transforming an image of an article, including clothing, for virtual presentation (Appleboim: FIGS. 1-71). Appleboim further teaches generating a first predicted type (Appleboim: FIG. 3, output 28; [0119], “product type”; [0121]) and a first predicted size of the item of clothing based on processing at least one of the first set of images using a machine learning model (FIG. 4, product size estimation 8A; [0125]-[0126]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chakravarty with the teaching of Appleboim by generating a first predicted type and a first predicted size of the item of clothing in order to extract more features of the item of clothing.

Chakravarty in view of Appleboim does not teach identifying, using a radio frequency identification (RFID) tag on the item of clothing, a true type and a true size of the item of clothing; it does teach identifying by using a barcode on the item (Chakravarty: Col. 6, lines 27-30). A person of ordinary skill in the art would understand that using an RFID tag to detect a type and a size of an item is common practice in the field. JP-art is an analogous art pertinent to the problem to be solved in this application and teaches a method for realizing a walk-through checkout station including an RFID interrogator equipped with a plurality of antennas (JP-art: FIGS. 1-8). JP-art further teaches identifying, using a radio frequency identification (RFID) tag on the item of clothing, a true type and a true size of the item of clothing (JP-art: FIGS. 1, 2, 4; Page 2, Sec. Description, 3rd paragraph, “RFID tags are provided in various configurations, sizes, read ranges, memory, volumes, and the like”; Page 4, 2nd paragraph, “RFID detection system 125 includes RFID tags … provides a product … size, cloth, color, pattern, etc. to a particular shirt”; Page 5, 3rd paragraph, “RFID tag 240 can be marked with an ID unique to each product”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Chakravarty in view of Appleboim with the teaching of JP-art by using RFID in order to obtain more accurate clothing information and for purposes such as anti-theft, inventory checks, etc. (JP-art: Page 4, 2nd paragraph).

Chakravarty in view of Appleboim, and further in view of JP-art, does not teach using RFID tag information as labeling information or ground truth data to train the machine learning model, i.e., comparing the first predicted type and the first predicted size to the true type and the true size, and training the machine learning model based on the comparison. Hussain is an analogous art pertinent to the problem to be solved in this application and teaches a method and a system for determining an activity in an environment, where the environment comprises a plurality of passive RFID tags (Hussain: Abstract; FIGS. 1-23). Hussain further teaches using RFID tag information as labeling information or ground truth data to train the machine learning model (Hussain: FIG. 23; Page 10, last paragraph – Page 11, 1st paragraph; Page 22, 2nd paragraph).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Chakravarty in view of Appleboim, and further in view of JP-art, with the teaching of Hussain by using RFID tag information as labeling information or ground truth data in order to improve the accuracy of the machine learning model (Hussain: Page 11, 1st paragraph).

Regarding claim 15, Chakravarty discloses a computer program product comprising one or more computer-readable storage media (FIG. 2, memory 202) having computer-readable program code collectively embodied therewith, the computer-readable program code collectively executable by one or more computer processors (FIG. 2, processor 200) to perform an operation comprising (Abstract; FIGS. 1-10): accessing a first set of images depicting a user selecting an item (FIG. 1, module 102; Col. 2, lines 46-57, “registering an identity of a person who visits an area designated for holding objects, capturing an image … an object in the version of the image, associating the registered identity of the person …”; Col. 6, lines 44-55; FIG. 8, step 806); generating a first predicted type of the item based on processing at least one of the first set of images using a machine learning model (FIG. 1, DNN 112; FIG. 2, object detection; FIG. 3, results 320; Col. 7, lines 7-8, “identify the type of object detected”; Col. 9, lines 20-23; FIGS. 4-9); identifying, using a barcode on the item (Col. 6, lines 27-30); and training the machine learning model based on images provided in an image database (FIG. 1, DNN 112, image DB 122; FIG. 2, neural network training; FIG. 10; Col. 9, lines 60-63).

Chakravarty does not disclose that the identified item is an item of clothing, and Chakravarty does not disclose generating a first predicted size of the item based on processing at least one of the first set of images using a machine learning model.
However, Chakravarty places no limitation on the item to be detected; the item can be any object, including an item of clothing. In the same field of endeavor, Appleboim teaches a method for transforming an image of an article, including clothing, for virtual presentation (Appleboim: FIGS. 1-71). Appleboim further teaches generating a first predicted type (Appleboim: FIG. 3, output 28; [0119], “product type”; [0121]) and a first predicted size of the item of clothing based on processing at least one of the first set of images using a machine learning model (FIG. 4, product size estimation 8A; [0125]-[0126]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chakravarty with the teaching of Appleboim by generating a first predicted type and a first predicted size of the item of clothing in order to extract more features of the item of clothing.

Chakravarty in view of Appleboim does not teach identifying, using a radio frequency identification (RFID) tag on the item of clothing, a true type and a true size of the item of clothing; it does teach identifying by using a barcode on the item (Chakravarty: Col. 6, lines 27-30). A person of ordinary skill in the art would understand that using an RFID tag to detect a type and a size of an item is common practice in the field. JP-art is an analogous art pertinent to the problem to be solved in this application and teaches a method for realizing a walk-through checkout station including an RFID interrogator equipped with a plurality of antennas (JP-art: FIGS. 1-8). JP-art further teaches identifying, using a radio frequency identification (RFID) tag on the item of clothing, a true type and a true size of the item of clothing (JP-art: FIGS. 1, 2, 4; Page 2, Sec. Description, 3rd paragraph, “RFID tags are provided in various configurations, sizes, read ranges, memory, volumes, and the like”; Page 4, 2nd paragraph, “RFID detection system 125 includes RFID tags … provides a product … size, cloth, color, pattern, etc. to a particular shirt”; Page 5, 3rd paragraph, “RFID tag 240 can be marked with an ID unique to each product”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Chakravarty in view of Appleboim with the teaching of JP-art by using RFID in order to obtain more accurate clothing information and for purposes such as anti-theft, inventory checks, etc. (JP-art: Page 4, 2nd paragraph).

Chakravarty in view of Appleboim, and further in view of JP-art, does not teach using RFID tag information as labeling information or ground truth data to train the machine learning model, i.e., comparing the first predicted type and the first predicted size to the true type and the true size, and training the machine learning model based on the comparison. Hussain is an analogous art pertinent to the problem to be solved in this application and teaches a method and a system for determining an activity in an environment, where the environment comprises a plurality of passive RFID tags (Hussain: Abstract; FIGS. 1-23). Hussain further teaches using RFID tag information as labeling information or ground truth data to train the machine learning model (Hussain: FIG. 23; Page 10, last paragraph – Page 11, 1st paragraph; Page 22, 2nd paragraph).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Chakravarty in view of Appleboim, and further in view of JP-art, with the teaching of Hussain by using RFID tag information as labeling information or ground truth data in order to improve the accuracy of the machine learning model (Hussain: Page 11, 1st paragraph).

Regarding claims 2, 10, and 16, Chakravarty in view of Appleboim, and further in view of JP-art, in view of Hussain teaches the method of claim 1, the system of claim 9, and the computer program product of claim 15. The modification further teaches outputting at least one of: (i) the first predicted type and the first predicted size, or (ii) the true type and the true size, via a display (Chakravarty: FIG. 1, display 118; Col. 6, lines 18-30); and receiving, from the user, confirmation of the true type and the true size (Chakravarty: Col. 7, lines 5-16; FIG. 8, step 812).

Regarding claims 3, 11, and 17, Chakravarty in view of Appleboim, and further in view of JP-art, in view of Hussain teaches the method of claim 1, the system of claim 9, and the computer program product of claim 15. The modification further teaches wherein generating the first predicted size comprises evaluating at least one of the first set of images to identify a location from which the user selected the item of clothing (Chakravarty: FIG. 2; Col. 8, lines 34-58, “tracks the locations of detected objects within the holding area … guides humans to … locations in the object-holding area”; see also Appleboim: [0171], “points-of-interest locations identifiers”; JP-art: Page 6, 4th paragraph, “determine the location of RFID tag”).

Regarding claim 4, Chakravarty in view of Appleboim, and further in view of JP-art, in view of Hussain teaches the method of claim 3.
The modification further teaches wherein generating the first predicted size comprises generating a predicted range of sizes based on the location (Appleboim: FIGS. 3-4; [0119], “generate images of the same shirt ranging from sizes x-small to x-large … depicting different sizes of a single product”; [0126]; [0171]).

Regarding claims 5, 12, and 18, Chakravarty in view of Appleboim, and further in view of JP-art, in view of Hussain teaches the method of claim 1, the system of claim 9, and the computer program product of claim 15. The modification further teaches wherein generating the first predicted size comprises one or more of: (i) evaluating at least one of the first set of images to infer a size of the user; (ii) evaluating historical item records to infer the true size; or (iii) evaluating one or more social media networks to infer the true size (Appleboim: FIG. 4, product size estimation 8A; [0126], “estimate the recommended size for each product dressed on the user … for each part of a user's body”; FIGS. 34-36).

Regarding claims 6, 13, and 19, Chakravarty in view of Appleboim, and further in view of JP-art, in view of Hussain teaches the method of claim 1, the system of claim 9, and the computer program product of claim 15. The modification further teaches accessing a second set of images depicting a second user selecting a second item of clothing, wherein the second item of clothing does not have an RFID tag; generating a second predicted type and a second predicted size of the second item of clothing based on processing at least one of the second set of images using the updated machine learning model; and outputting the second predicted type and the second predicted size via a display (Chakravarty: FIG. 1; Col. 2, lines 45-57, “submitting a version of the image to a deep neural network trained to detect and recognize objects in images like those objects held in the designated area, detecting an object in the version of the image, associating the registered identity of the person with the detected object” (emphasis added); Note: the recited claim limitation is about using a trained machine learning model to predict the type and size of any item of clothing; no RFID tag is involved).

Regarding claim 7, Chakravarty in view of Appleboim, and further in view of JP-art, in view of Hussain teaches the method of claim 3. The modification further teaches determining a size of the second user; determining that the second predicted size does not match the size of the second user; and outputting, via the display, an indication that the second predicted size and the size of the second user do not match (Chakravarty: FIG. 1; Col. 6, lines 18-30, “display 118 may be included in the object-identification system 100 … configured with input/output devices … a physical or virtual keyboard, keypad, barcode scanner, microphone, camera, and may be used to register the identities of persons entering the object-holding area and/or to scan object labels”; Note: a barcode scanner associated with the display performs verification of the predicted product, and the verification results will be shown on the display).

Regarding claims 8, 14, and 20, Chakravarty in view of Appleboim, and further in view of JP-art, in view of Hussain teaches the method of claim 1, the system of claim 9, and the computer program product of claim 15. The modification further teaches accessing a second set of images depicting the user exiting a fitting area; and processing at least one of the second set of images using a machine learning model to predict whether the user retained the item of clothing (Chakravarty: FIG. 1; Col. 6, lines 31-56, “During operation of the object-identification system 100, persons arrives at the object-holding area to perform any one or more of at least four object handling activities, including depositing an object, removing an object, moving an object to another spot in the holding area, or alerting personnel of an object warranting inspection. In general, the object-identification system registers the identities of persons who arrive at the holding area (i.e., who interact with the object-identification system) and associates each registered person with one or more objects that the person is handling. Using image processing techniques, the object-identification system continuously monitors and acquires real-time image data of the holding area”; Note: the holding area can be any area, including a fitting area).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAO LIU, whose telephone number is (571) 272-4539. The examiner can normally be reached Monday-Thursday and alternate Fridays, 8:30-4:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Mehmood, can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/XIAO LIU/
Primary Examiner, Art Unit 2664
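For orientation, the claim-1 pipeline as the rejection characterizes it (predict an item's type and size from an image with a machine learning model, read the true type and size from the item's RFID tag, compare the two, and train the model on the comparison) can be sketched in a few lines. This is an illustrative toy, not the applicant's or any cited reference's code: all names (`RfidTag`, `NearestCentroidModel`, `process_selection`) are hypothetical, and a 1-nearest-neighbor lookup stands in for the claimed model.

```python
from dataclasses import dataclass

@dataclass
class RfidTag:
    true_type: str   # ground-truth label encoded on the tag (hypothetical)
    true_size: str

class NearestCentroidModel:
    """Toy stand-in for the claimed machine learning model."""
    def __init__(self):
        self.examples = []   # (feature_vector, type, size) training set

    def predict(self, features):
        if not self.examples:
            return ("unknown", "unknown")
        # 1-nearest-neighbor over stored examples (squared distance)
        best = min(self.examples,
                   key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], features)))
        return (best[1], best[2])

    def train(self, features, true_type, true_size):
        self.examples.append((features, true_type, true_size))

def process_selection(model, image_features, tag):
    # Predict type and size from the image features ...
    predicted_type, predicted_size = model.predict(image_features)
    # ... compare the prediction against the RFID-derived ground truth ...
    correct = (predicted_type, predicted_size) == (tag.true_type, tag.true_size)
    # ... and train the model based on that comparison.
    if not correct:
        model.train(image_features, tag.true_type, tag.true_size)
    return predicted_type, predicted_size, correct

model = NearestCentroidModel()
# First sighting: the empty model is wrong, so the RFID label trains it.
_, _, ok = process_selection(model, [0.9, 0.1], RfidTag("shirt", "M"))
print(ok)  # False
# A second, similar image is now predicted correctly.
_, _, ok = process_selection(model, [0.88, 0.12], RfidTag("shirt", "M"))
print(ok)  # True
```

The point of the loop, and of the examiner's reliance on Hussain, is that the RFID read supplies labels for free, so no human annotation is needed to keep training the model.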

Prosecution Timeline

May 07, 2024
Application Filed
Mar 11, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603972: WIRELESS TRANSMITTER IDENTIFICATION IN VISUAL SCENES
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12592069: OBJECT RECOGNITION METHOD AND APPARATUS, AND DEVICE AND MEDIUM
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12579834: Information Extraction Method and Apparatus for Text With Layout
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12576873: SYSTEM AND METHOD OF CAPTIONS FOR TRIGGERS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12573175: TARGET TRACKING METHOD, TARGET TRACKING SYSTEM AND ELECTRONIC DEVICE
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 89%
With Interview: 99% (+11.5%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 290 resolved cases by this examiner. Grant probability derived from career allow rate.
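The headline figures are mutually consistent if the +11.5% interview lift is applied multiplicatively to the 89% base rate. A quick check (this is an inference about how the numbers combine, not the tool's documented formula):

```python
base_grant_probability = 0.89   # career allow rate shown above
interview_lift = 0.115          # relative lift from examiner interviews

with_interview = base_grant_probability * (1 + interview_lift)
print(f"{with_interview:.0%}")  # 99%
```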
