Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
This application has an effective filing date of 12/28/2023.
IDS
The information disclosure statement (IDS) filed 7/8/2005 has been considered by the Examiner.
Examiner Note
Claims 1-20 are pending in the instant application.
Claims 1, 10, and 18 have been amended.
Claims 1-20 have been rejected.
Response to Arguments
The rejection of claims 1-20 under 35 U.S.C. 101 is withdrawn in view of applicant’s amendments. The newly claimed subject matter recites additional elements that integrate the noted judicial exception into a practical application.
Applicant’s arguments regarding the pending prior art rejections are moot in view of the new grounds of rejection, which were necessitated by applicant’s claim amendments.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The term “proximate” in claims 1, 10, and 18 is a relative term which renders the claim indefinite. The term “proximate” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (US 2025/0020483 A1) in view of Flint (US 11,244,547).
Referring to claim 1, Wang discloses a computer-implemented method for providing an item search service for a premises comprising a set of Internet of Things (IoT) cameras (Wang, Fig. 2, “204”), comprising:
receiving, by at least one computer processor and via a user interface of the item search service, first user input regarding an item of interest, wherein the first user input comprises one or more of speech input or text input (Wang [0062] “The user may then use voice input, a hard or soft keyboard, or other input means to enter the user-designated label into the input field 330. The user may then select the save selector 332 to save the entered label and apply the label to the object in the map 300 itself so that the map/map metadata indicates the user-designated label for the associated object. In the present instance, the user has labeled object “A” as “car keys”.);
accessing a plurality of images of the premises captured by the set of IoT cameras (Wang [0063] “…The prompt 400 may further instruct the user to press the start selector 410 and then show the keys to the user's smartphone camera (or another device camera) from different angles. Thus, the user might select the start selector 410 and then hold the keys up to his/her smartphone camera and then rotate the keys around 360 degrees in both the horizontal and vertical planes within the camera's field of view for the system/smartphone to generate a 3D point cloud of the keys for recognition of the keys in the future (e.g., inclusion of those 3D points/features into the map 300 itself).”);
executing a machine learning model to identify one or more images in the plurality of images that include the item of interest based at least on the first user input (Wang [0025] “Additionally, object recognition and/or other artificial intelligence (AI)-based systems can be used with the scanning process such that objects may be identified from a data training set as part of the scan and then labelled accordingly inside the space/map to achieve semantic understanding. Thus, the AI-enhanced semantic map may not only create a 3D feature-rich map but also contain data like instances of objects recognized, their names, and their respective locations inside the mappable space. Utilizing semantic understanding, a device may thus be used to track and place objects relevant to the user.”);
generating an item search result based on the identified one or more images (Wang [0069] and Fig. 7); and
providing the item search result via the user interface of the item search service (Wang [0069] and Fig. 7).
Wang does not disclose, but Flint discloses in a similar field of endeavor:
identifying a smart light located in a premises in which the item of interest is located and that is proximate to the item of interest (Flint [column 1-2] A method for a monitoring system is described. The method may be performed by a computing device including at least one processor, such as a camera-enabled device. The method may include monitoring a physical environment using the camera-enabled device, detecting a trigger in the physical environment based on the monitoring, where the trigger includes an object, a person, an event, or any combination thereof, selecting, based on the detecting, a direction of a set of directions to emit light via a light emitting source, activating the light emitting source based on the selecting, and emitting, via the light emitting source, the light in the direction based on the activating and the detected trigger.); and
causing the smart light to turn on to assist a user to locate the item of interest within the premises (Flint [column 1-2] A method for a monitoring system is described. The method may be performed by a computing device including at least one processor, such as a camera-enabled device. The method may include monitoring a physical environment using the camera-enabled device, detecting a trigger in the physical environment based on the monitoring, where the trigger includes an object, a person, an event, or any combination thereof, selecting, based on the detecting, a direction of a set of directions to emit light via a light emitting source, activating the light emitting source based on the selecting, and emitting, via the light emitting source, the light in the direction based on the activating and the detected trigger.).
This noted feature of Flint is applicable to the method and system of Wang because both references share characteristics and capabilities; namely, both are directed to camera-based monitoring of a premises and to locating objects or persons of interest within it. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wang as described above because doing so may deter an intruder from an intended action (e.g., theft, property damage, etc.). See Flint, columns 1-2.
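Examiner note: solely to illustrate the claim 1 mapping above, the combined method may be sketched as follows (Python; all identifiers, e.g., model.contains, smart_lights, turn_on, are hypothetical and appear in neither Wang nor Flint):

    from dataclasses import dataclass, field

    @dataclass
    class SearchResult:
        item: str
        images: list = field(default_factory=list)   # images identified as showing the item
        camera_id: str = ""                           # IoT camera that captured the best match

    def item_search(query, iot_images, model, smart_lights):
        # (a) first user input (speech transcribed to text, or text) received as `query`
        # (b) access images captured by the premises' set of IoT cameras
        # (c) execute a machine learning model to identify images containing the item
        matches = [(cam, img) for cam, imgs in iot_images.items()
                   for img in imgs if model.contains(img, query)]
        result = SearchResult(query)
        if matches:
            result.camera_id = matches[0][0]
            result.images = [img for _, img in matches]
            # (d) identify a smart light proximate to the item (here: the light
            # associated with the matching camera) and cause it to turn on
            light = smart_lights.get(result.camera_id)
            if light is not None:
                light.turn_on()
        # (e) the generated result is provided via the service's user interface
        return result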
Referring to claim 2, Wang discloses a computer-implemented method of claim 1, wherein the first user input comprises natural language input (Wang [0094] “…For instance, an audible prompt may be presented to label a given object by current location rather than character, and then a user may provide an audible response as detected via a microphone and processed using speech recognition and natural language understanding to then apply a location-based label indicated in the audible input.”) and wherein the machine learning model comprises a multimodal machine learning model trained on a set of images and natural language text respectively associated with each image in the set of images (Wang [0025] “Additionally, object recognition and/or other artificial intelligence (AI)-based systems can be used with the scanning process such that objects may be identified from a data training set as part of the scan and then labelled accordingly inside the space/map to achieve semantic understanding. Thus, the AI-enhanced semantic map may not only create a 3D feature-rich map but also contain data like instances of objects recognized, their names, and their respective locations inside the mappable space. Utilizing semantic understanding, a device may thus be used to track and place objects relevant to the user.”).
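Examiner note: the “multimodal machine learning model” limitation of claim 2 may be illustrated by the following CLIP-style similarity sketch (Python; embed_text and embed_image are hypothetical stand-ins for encoders trained on paired images and natural language text):

    import numpy as np

    def cosine(a, b):
        # similarity between an image embedding and a text embedding
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def rank_images(query, images, embed_text, embed_image, top_k=5):
        q = embed_text(query)                              # natural language input
        scored = [(cosine(embed_image(im), q), im) for im in images]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return scored[:top_k]                              # most likely to show the item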
Referring to claim 3, Wang discloses a computer-implemented method of claim 1, further comprising: receiving, via the user interface of the item search service, second user input that specifies an image of the item of interest and a label assigned to the item of interest by a user (Wang [Claim 7] The first device of claim 1, wherein the instructions are executable to: present a user interface (UI), the UI indicating an object in the semantic map that has not been identified via object recognition; receive user input indicating a label for the object; and update the semantic map with the label.); and
utilizing the image of the item of interest and the label assigned to the item of interest to train the machine learning model (Wang [0025] “Additionally, object recognition and/or other artificial intelligence (AI)-based systems can be used with the scanning process such that objects may be identified from a data training set as part of the scan and then labelled accordingly inside the space/map to achieve semantic understanding…”).
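Examiner note: the claim 3 limitation of using a user-supplied image and label to train the model may be illustrated as follows (Python; the dataset format and model.finetune are hypothetical):

    def add_user_example(training_set, item_image, user_label):
        # fold the user-designated label (e.g., "car keys") and its image
        # into the training data
        training_set.append({"image": item_image, "text": user_label})
        return training_set

    def retrain(model, training_set):
        # fine-tune so the labeled item is recognized in later searches
        return model.finetune(training_set)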
Referring to claim 4, Wang discloses a computer-implemented method of claim 1, further comprising: performing the receiving, accessing, executing, generating and providing steps on one or more devices located within the premises (Wang [Fig. 7]).
Referring to claim 5, Wang discloses a computer-implemented method of claim 1, further comprising: selecting the machine learning model from among a plurality of different machine learning models, wherein each machine learning model of the plurality of different machine learning models is trained or fine-tuned for one of a particular premises type or a particular user demographic (Wang [0027] “Objects added pre-mapping may be used to improve an already-created database of objects that can be recognized, and objects scanned and labeled during mapping can be added in real time to the semantic map as it forms. Objects added to the database after the semantic map has been created can further update the semantic understanding of the existing map.”).
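Examiner note: the model-selection limitation of claim 5 may be illustrated as follows (Python; the models registry and its keys are hypothetical):

    def select_model(models, premises_type=None, demographic=None):
        # pick the model trained or fine-tuned for this premises type
        # (e.g., "household", "warehouse") or user demographic; fall back
        # to a general-purpose model otherwise
        key = premises_type or demographic
        return models.get(key, models["default"])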
Referring to claim 6, Wang does not disclose, but Flint discloses a computer-implemented method of claim 1, further comprising:
authenticating a user of the item search service (Flint [column 3-4] “Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for capturing an image or a video of the person using the camera-enabled device, performing an identification operation based on the captured image or video, where the identification operation includes a facial recognition, a license plate recognition, a time sequenced analysis, or any combination thereof, determining, based on the identifying, that the person may be a known user associated with the monitoring system, where emitting the light includes, and emitting, via the light emitting source, the light in each of the set of directions based on the person being the known user associated with the monitoring system.”); and
determining that the user is an authorized user of the item search service based on the authenticating; wherein one or more of the receiving, accessing, executing, generating and providing is performed in response to the determining that the user is the authorized user of the item search service (Flint [column 3-4] “Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for capturing an image or a video of the person using the camera-enabled device, performing an identification operation based on the captured image or video, where the identification operation includes a facial recognition, a license plate recognition, a time sequenced analysis, or any combination thereof, determining, based on the identifying, that the person may be a known user associated with the monitoring system, where emitting the light includes, and emitting, via the light emitting source, the light in each of the set of directions based on the person being the known user associated with the monitoring system.”).
This noted feature of Flint is applicable to the method and system of Wang, and the combination would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, for the same reasons as set forth above with respect to claim 1. See Flint, columns 1-2.
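Examiner note: the authentication gating of claim 6 may be illustrated as follows (Python; recognize_face and run_search are hypothetical placeholders for Flint's identification operation and Wang's search steps):

    def search_if_authorized(user_image, authorized_users, recognize_face, run_search):
        # identification operation, e.g., facial recognition (Flint, cols. 3-4)
        identity = recognize_face(user_image)
        if identity not in authorized_users:
            raise PermissionError("not an authorized user of the item search service")
        # the receiving/accessing/executing/generating/providing steps run
        # only after the user is determined to be an authorized user
        return run_search()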
Referring to claim 7, Wang discloses the computer-implemented method of claim 1, further comprising: receiving, via the user interface of the item search service, second user input that specifies an item that should not be searchable; and in response to receiving the second user input, applying a content filter that prevents the item search service from searching for the item that should not be searchable or that prevents the item search service from returning an item search result for the item that should not be searchable (Wang [0061] The user may thus pick and choose some or all of the objects from the list for which to provide/generate labels. Note that the user might only choose to label objects that the user considers important in certain non-limiting embodiments, since labeling each and every computer-unknown object from a given area might be tedious and not altogether necessary.).
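Examiner note: the content filter of claim 7 may be illustrated as follows (Python; the blocked_items set is a hypothetical store of user-designated non-searchable items):

    def apply_content_filter(query, blocked_items):
        # prevent the service from searching for a non-searchable item
        if query.strip().lower() in blocked_items:
            return None
        return query

    def filter_results(results, blocked_items):
        # alternatively, prevent returning a result for a non-searchable item
        return [r for r in results if r.item.lower() not in blocked_items]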
Referring to claim 8, Wang discloses executing a machine learning model (Wang [0025] “Thus, the AI-enhanced semantic map may not only create a 3D feature-rich map but also contain data like instances of objects recognized, their names, and their respective locations inside the mappable space. Utilizing semantic understanding, a device may thus be used to track and place objects relevant to the user.”). Wang does not disclose but Flint discloses the computer-implemented method of claim 1, further comprising:
determining an identity of a user of the item search service (Flint [column 16], “The device 355 may perform one or more actions based on the detected trigger, the determined identity, one or more operation modes, or any combination thereof. For example, the device 355 may be configured to operate in various modes. The device 355 may be configured to operate in a first mode (also referred to as a security mode or an armed state) during pre-configured hours of a day, based on an input from an occupant of the building 350, among other examples. The device 355 may determine actions to perform upon detection of a trigger depending on a current operation mode of the device 355, an identity of a detected person, a categorization of the event, object, or person, or any combination thereof. As an illustrative example, the device 355 may emit a light or a sound, begin recording video or taking images, notify a user or authorities, or a combination thereof based on detecting a trigger.”);
identifying the one or more images in the plurality of images that include the item of interest based at least on the first user input and the identity of the user (Flint [column 16], “The device 355 may perform one or more actions based on the detected trigger, the determined identity, one or more operation modes, or any combination thereof. For example, the device 355 may be configured to operate in various modes. The device 355 may be configured to operate in a first mode (also referred to as a security mode or an armed state) during pre-configured hours of a day, based on an input from an occupant of the building 350, among other examples. The device 355 may determine actions to perform upon detection of a trigger depending on a current operation mode of the device 355, an identity of a detected person, a categorization of the event, object, or person, or any combination thereof. As an illustrative example, the device 355 may emit a light or a sound, begin recording video or taking images, notify a user or authorities, or a combination thereof based on detecting a trigger.”).
This noted feature of Flint is applicable to the method and system of Wang, and the combination would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, for the same reasons as set forth above with respect to claim 1. See Flint, columns 1-2.
Referring to claim 9, Wang discloses the computer-implemented method of claim 1, wherein generating the item search result based on the identified one or more images comprises one or more of: generating a speech or text description of a location of the item of interest based on the identified one or more images; or generating an image that shows the location of the item of interest based on the identified one or more images (Wang, “[0094] Also before concluding, note that objects may be labeled audibly as well as through a GUI if desired. For instance, an audible prompt may be presented to label a given object by current location rather than character, and then a user may provide an audible response as detected via a microphone and processed using speech recognition and natural language understanding to then apply a location-based label indicated in the audible input.”).
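Examiner note: the location-description limitation of claim 9 may be illustrated as follows (Python; the room_names mapping is hypothetical; the returned string could be rendered as text or synthesized as speech):

    def describe_location(item, camera_id, room_names):
        # generate a natural-language description of the item's location
        # based on which camera captured the identified image
        room = room_names.get(camera_id, "an unknown location")
        return f"Your {item} was last seen in {room}."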
Claims 10-20 recite limitations substantially similar to those of claims 1-9 and are rejected under the same rationale as set forth above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW S GART, whose telephone number is (571) 272-3955. The examiner can normally be reached M-F 8:30 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Deborah Reynolds, can be reached at 571-272-0734. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MATTHEW S GART/Supervisory Patent Examiner, Art Unit 3696