DETAILED ACTION
Claim 7 has been cancelled.
Claims 1-6 and 8-20 are currently pending.
Response to Arguments
Applicant’s arguments with respect to claims 1-6 and 8-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 6 and 8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 6 recites “the first trigger of object recognition for the first item is an identification, by the MV computing device, of a first label associated with the first item”. Claim 6 depends from claim 1, which recites “wherein the first trigger of object recognition for the first item is an identification, by the MV computing device, of an object resembling the first item”. It is unclear how the first trigger of object recognition can be both (1) an identification of an object resembling the first item (as recited in claim 1) and (2) an identification of a first label associated with the first item (as recited in claim 6).
Claim 8 recites “a machine learning model (MLM)” in line 2. Claim 8 depends from claim 1, which recites “a machine learning model (MLM)” in line 2. It is unclear whether the recitation of “a machine learning model (MLM)” in claim 8 refers to the same model as “a machine learning model (MLM)” recited in claim 1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 6, 8, 9, 11-14 and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Shakes et al., US Patent 7,769,221 (hereafter “Shakes”), in view of Sinha et al., US Publication 2023/0051146 (hereafter “Sinha”).
Referring to claim 1, Shakes discloses a system comprising a dispatch module, the dispatch module comprising:
one or more processors configured to recognize objects in images captured by a machine vision (MV) computing device (col. 6, lines 33-59, an order fulfillment center may include one or more cameras or other image capture devices 310 configured to capture images of order processing at one or more processing stations, such as sorting stations 50, packing stations 60, and shipping stations 70); and
a memory in communication with the one or more processors and storing instructions that, when executed by the one or more processors (col. 25, lines 25-27, In the illustrated embodiment, computer system 1300 may include one or more processors 1310 coupled to a system memory 1320), are configured to cause the dispatch module to:
receive a dispatch request for a first item of a plurality of items, wherein the plurality of items includes one or more auxiliary items (col. 13, lines 19-34, As illustrated by block 400, one or more items for an order may arrive at a packing station);
monitor the plurality of items after receiving the dispatch request for the first item, wherein one or more auxiliary items pass in view of the MV computing device before the first item is in view of the MV computing device (col. 13, lines 19-34, In other embodiments, however, various conveyance means, such as conveyor belts, may be used to deliver the items to the packing station);
receive, from the MV computing device, a first trigger of object recognition for the first item (col. 13, lines 35-60, In other embodiments, image or video capturing may be initiated automatically by control system 300, or another computer system configured to do so. For example, the order fulfillment center may include one or more motion detection devices in and around the packing station configured to detect the arrival of items for processing and/or packaging);
initiate, in response to receiving the first trigger of object recognition, a first video processing session to save a first video captured by the MV computing device, the first video comprising a process of packaging the first item (col. 13, lines 19-34, After the items for an order have arrived at the packing station, a control system, such as control system 300, may start capturing images of the packaging of the one or more items in a package, as illustrated by block 410);
receive, from the MV computing device, a second trigger of label recognition associated with the first item, wherein the second trigger of label recognition is an identification, by the MV computing device, of a label associated with the first item (col. 14, lines 46-64, In yet other embodiments, packing personnel may manually signal the completion of order processing for an order via one or more manual switches, such as buttons, levers, foot pedals, scan code readers, etc. For example, after processing an order, a processing agent may use a scan code reader to read an identification code on the packed and sealed order and control system 300 may receive the identification code and information indicating the completion of order processing for that order);
discontinue, in response to receiving the second trigger of label recognition, the first video processing session (col. 14, lines 30-45, Detecting a package leaving the processing station, as illustrated by block 420, may signal the stopping of image capturing for the package and order, as illustrated by block 430);
receive, from the MV computing device and subsequent to discontinuing the first video processing session, a third trigger of label recognition associated with the first item, the third trigger being associated with a transport request (col. 7, lines 46-65, Detecting an RFID may trigger the capturing of visual verification data characteristic of the particular stage of order processing. For example, an item may be detected, (e.g. by detecting an RFID, reading a scan code, or visually by processing personnel) upon arrival at a particular stage of order processing, and one or more types of data (e.g. images, audio, environmental, timing, etc) may be capture); and
initiate, in response to receiving the third trigger, a second video processing session to save a second video captured by the MV computing device, the second video comprising a process of finalizing the transport request (col. 3-4, lines 61-67, 1-9, Different data captured devices may be used to capture data in different stages of item or order processing. For example, cameras may be used to capture images of the sorting, packing and shipping of an item, in one embodiment).
While Shakes discloses initiating a first video processing session, Shakes does not disclose wherein the first trigger of object recognition for the first item is an identification, by the MV computing device, of an object resembling the first item.
Sinha discloses one or more processors operating a machine learning model (MLM) configured to recognize objects in images captured by a machine vision (MV) computing device (paragraph 29, To perform object recognition tasks (e.g., object classification and possibly other tasks, such as object localization), the image processing unit 136 utilizes one or more convolutional neural networks (CNNs) 140 stored in the memory 130);
monitor the plurality of items after receiving the dispatch request for the first item, wherein one or more auxiliary items pass in view of the MV computing device before the first item is in view of the MV computing device (paragraph 69, At block 902, the REMT application 132 receives video frames (e.g., an arbitrary but consecutive segment of the full set of frames received at stage 602));
receive, from the MV computing device, a first trigger of object recognition for the first item amongst the plurality of items monitored, wherein the first trigger of object recognition for the first item is an identification, by the MV computing device, of an object resembling the first item (paragraph 69, at block 904, the image processing unit 136 (using a CNN 140) performs object recognition to detect events depicted by the frames. If the image processing unit 136 detects an event in a suspect class at block 904, flow proceeds to block 906);
initiate, in response to receiving the first trigger of object recognition, a first video processing session to save a first video captured by the MV computing device (paragraph 71, at block 908 the REMT application 132 causes a corresponding event to be stored in the event database 144).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to initiate a process to save a first video when an object is recognized. The motivation for doing so would have been to automate the process of saving important video in order to reduce the operations required of a human. Therefore, it would have been obvious to combine Sinha with Shakes to obtain the invention as specified in claim 1.
Referring to claim 2, Shakes discloses wherein the instructions are further configured to cause the dispatch module to:
receive, from a transport system (col. 13, lines 19-34, In other embodiments, the method illustrated in FIG. 4A may also be performed at any of a number of processing stations in a materials handling facility, such as a sorting station, packing station, quality assurance station, and/or shipping station, among others), an indication of receipt of the first item (col. 14, lines 46-64, In yet other embodiments, packing personnel may manually signal the completion of order processing for an order via one or more manual switches, such as buttons, levers, foot pedals, scan code readers, etc. For example, after processing an order, a processing agent may use a scan code reader to read an identification code on the packed and sealed order and control system 300 may receive the identification code and information indicating the completion of order processing for that order);
discontinue, in response to receiving the indication of receipt of the first item, the second video processing session (col. 14, lines 30-45, Detecting a package leaving the processing station, as illustrated by block 420, may signal the stopping of image capturing for the package and order, as illustrated by block 430) (col. 19-20, lines 60-67, 1-13, FIG. 8C illustrates one embodiment of a webpage displaying captured images of an order).
Referring to claim 6, Shakes discloses wherein:
the first trigger of object recognition for the first item is an identification, by the MV computing device, of a first label associated with the first item (col. 13, lines 35-60, In some embodiments, an identification code reader, such as a scan-code reader, may also serve as a manual image capture trigger); and
the second trigger of label recognition is an identification, by the MV computing device, of a second label associated with the dispatch request (col. 14, lines 46-64, In yet other embodiments, packing personnel may manually signal the completion of order processing for an order via one or more manual switches, such as buttons, levers, foot pedals, scan code readers, etc. For example, after processing an order, a processing agent may use a scan code reader to read an identification code on the packed and sealed order and control system 300 may receive the identification code and information indicating the completion of order processing for that order).
Referring to claim 8, Sinha discloses wherein the instructions are further configured to cause the dispatch module to train a machine learning model (MLM) to identify objects captured by the MV computing device and associate them with one or more items, and wherein MLM learns in response to receiving the second trigger of label recognition, thereby indicating that the object resembles the first item (paragraph 31, The training database 142 includes images (e.g., video frames), and corresponding labels, that the computing system 102 (or another computing system not shown in FIG. 1) may use to train the CNN(s) 140).
Referring to claim 9, Shakes discloses wherein the second trigger of label recognition and the third trigger of label recognition are identifications, by the MV computing device, of a label associated with the dispatch request (col. 14, lines 46-64, In yet other embodiments, packing personnel may manually signal the completion of order processing for an order via one or more manual switches, such as buttons, levers, foot pedals, scan code readers, etc. For example, after processing an order, a processing agent may use a scan code reader to read an identification code on the packed and sealed order and control system 300 may receive the identification code and information indicating the completion of order processing for that order) (col. 7, lines 46-65, Detecting an RFID may trigger the capturing of visual verification data characteristic of the particular stage of order processing. For example, an item may be detected, (e.g. by detecting an RFID, reading a scan code, or visually by processing personnel) upon arrival at a particular stage of order processing, and one or more types of data (e.g. images, audio, environmental, timing, etc) may be capture).
Referring to claim 11, Shakes discloses a system comprising:
a machine vision (MV) computing device comprising a camera (col. 6, lines 33-59, an order fulfillment center may include one or more cameras or other image capture devices 310 configured to capture images of order processing at one or more processing stations, such as sorting stations 50, packing stations 60, and shipping stations 70) configured to:
scan for a first trigger of object recognition of a first item amongst a plurality of items in view of the MV computing device (col. 13, lines 35-60, In some embodiments, an identification code reader, such as a scan-code reader, may also serve as a manual image capture trigger);
capture a first video comprising a process of packaging the first item (col. 13, lines 19-34, After the items for an order have arrived at the packing station, a control system, such as control system 300, may start capturing images of the packaging of the one or more items in a package, as illustrated by block 410); and
scan for a second trigger of label recognition (col. 14, lines 46-64, In yet other embodiments, packing personnel may manually signal the completion of order processing for an order via one or more manual switches, such as buttons, levers, foot pedals, scan code readers, etc. For example, after processing an order, a processing agent may use a scan code reader to read an identification code on the packed and sealed order and control system 300 may receive the identification code and information indicating the completion of order processing for that order); and
a dispatch module comprising:
one or more processors; and
a memory in communication with the one or more processors and storing instructions that, when executed by the one or more processors (col. 25, lines 25-27, In the illustrated embodiment, computer system 1300 may include one or more processors 1310 coupled to a system memory 1320), are configured to cause the dispatch module to:
receive a dispatch request for the first item (col. 13, lines 19-34, As illustrated by block 400, one or more items for an order may arrive at a packing station);
monitor the plurality of items after receiving the dispatch request for the first item, wherein one or more auxiliary items pass in view of the MV computing device before the first item is in view of the MV computing device (col. 13, lines 19-34, In other embodiments, however, various conveyance means, such as conveyor belts, may be used to deliver the items to the packing station);
receive, from the MV computing device, the first trigger of object recognition (col. 13, lines 19-34, After the items for an order have arrived at the packing station, a control system, such as control system 300, may start capturing images of the packaging of the one or more items in a package, as illustrated by block 410);
initiate, in response to receiving the first trigger of object recognition, a first video processing session to save the first video (col. 13, lines 19-34, After the items for an order have arrived at the packing station, a control system, such as control system 300, may start capturing images of the packaging of the one or more items in a package, as illustrated by block 410);
receive, from the MV computing device, the second trigger of label recognition (col. 14, lines 46-64, In yet other embodiments, packing personnel may manually signal the completion of order processing for an order via one or more manual switches, such as buttons, levers, foot pedals, scan code readers, etc. For example, after processing an order, a processing agent may use a scan code reader to read an identification code on the packed and sealed order and control system 300 may receive the identification code and information indicating the completion of order processing for that order); and
discontinue, in response to receiving the second trigger, the first video processing session (col. 14, lines 30-45, Detecting a package leaving the processing station, as illustrated by block 420, may signal the stopping of image capturing for the package and order, as illustrated by block 430).
Shakes does not expressly disclose wherein the first trigger of object recognition is identification of an object resembling the first item completed by comparing the first item to a plurality of object images stored in a database.
Sinha discloses scan for a first trigger of object recognition of a first item, wherein the first trigger of object recognition is identification of an object resembling the first item completed by comparing the first item to a plurality of object images stored in a database (paragraph 85, At decision block 1207 it is determined if there is a match of the product using the AI image recognition. If so, the product information is provided to the user at step 1208).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to initiate a process to save a first video when an object is recognized. The motivation for doing so would have been to automate the process of saving important video in order to reduce the operations required of a human. Therefore, it would have been obvious to combine Sinha with Shakes to obtain the invention as specified in claim 11.
Referring to claim 12, Shakes discloses wherein the camera is further configured to capture a second video captured by the MV computing device, the second video comprising a process of finalizing a transport request (col. 3-4, lines 61-67, 1-9, Different data captured devices may be used to capture data in different stages of item or order processing. For example, cameras may be used to capture images of the sorting, packing and shipping of an item, in one embodiment), and wherein the instructions are further configured to cause the dispatch module to:
receive, from the MV computing device and subsequent to discontinuing the first video processing session, a third trigger of label recognition associated with the first item, the third trigger being associated with the transport request (col. 7, lines 46-65, Detecting an RFID may trigger the capturing of visual verification data characteristic of the particular stage of order processing. For example, an item may be detected, (e.g. by detecting an RFID, reading a scan code, or visually by processing personnel) upon arrival at a particular stage of order processing, and one or more types of data (e.g. images, audio, environmental, timing, etc) may be capture); and
initiate, in response to receiving the third trigger, a second video processing session to save a second video captured by the MV computing device, the second video comprising a process of finalizing the transport request (col. 3-4, lines 61-67, 1-9, Different data captured devices may be used to capture data in different stages of item or order processing. For example, cameras may be used to capture images of the sorting, packing and shipping of an item, in one embodiment).
Referring to claim 13, Shakes discloses wherein the second trigger of label recognition and the third trigger of label recognition are identifications, by the MV computing device, of a label associated with the dispatch request (col. 14, lines 46-64, In yet other embodiments, packing personnel may manually signal the completion of order processing for an order via one or more manual switches, such as buttons, levers, foot pedals, scan code readers, etc. For example, after processing an order, a processing agent may use a scan code reader to read an identification code on the packed and sealed order and control system 300 may receive the identification code and information indicating the completion of order processing for that order) (col. 7, lines 46-65, Detecting an RFID may trigger the capturing of visual verification data characteristic of the particular stage of order processing. For example, an item may be detected, (e.g. by detecting an RFID, reading a scan code, or visually by processing personnel) upon arrival at a particular stage of order processing, and one or more types of data (e.g. images, audio, environmental, timing, etc) may be capture).
Referring to claim 14, Shakes discloses wherein the instructions are further configured to cause the dispatch module to:
receive, from a transport system, an indication of receipt of the first item; and
discontinue, in response to receiving the indication of receipt of the first item, the second video processing session (col. 14, lines 46-64, In yet other embodiments, packing personnel may manually signal the completion of order processing for an order via one or more manual switches, such as buttons, levers, foot pedals, scan code readers, etc. For example, after processing an order, a processing agent may use a scan code reader to read an identification code on the packed and sealed order and control system 300 may receive the identification code and information indicating the completion of order processing for that order).
Referring to claim 17, Sinha discloses wherein the instructions are further configured to cause the dispatch module to train a machine learning model (MLM) to identify objects captured by the MV computing device and associate them with one or more items, and wherein MLM learns in response to receiving the second trigger of label recognition, thereby indicating that the object resembles the first item (paragraph 31, The training database 142 includes images (e.g., video frames), and corresponding labels, that the computing system 102 (or another computing system not shown in FIG. 1) may use to train the CNN(s) 140).
Referring to claim 18, Shakes discloses a system comprising:
a first machine vision (MV) computing device comprising a first camera (col. 6, lines 33-59, an order fulfillment center may include one or more cameras or other image capture devices 310 configured to capture images of order processing at one or more processing stations, such as sorting stations 50, packing stations 60, and shipping stations 70) configured to:
monitor a plurality of items including a first item and one or more auxiliary items, wherein one or more of the auxiliary items pass in view of the MV computing device before the first item is in view of the MV computing device (col. 13, lines 19-34, In other embodiments, however, various conveyance means, such as conveyor belts, may be used to deliver the items to the packing station);
scan for a first trigger of object recognition of the first item from amongst the plurality of items (col. 13, lines 35-60, In some embodiments, an identification code reader, such as a scan-code reader, may also serve as a manual image capture trigger);
capture a first video comprising a process of packaging the first item (col. 13, lines 19-34, After the items for an order have arrived at the packing station, a control system, such as control system 300, may start capturing images of the packaging of the one or more items in a package, as illustrated by block 410); and
scan for a second trigger of label recognition (col. 14, lines 46-64, In yet other embodiments, packing personnel may manually signal the completion of order processing for an order via one or more manual switches, such as buttons, levers, foot pedals, scan code readers, etc. For example, after processing an order, a processing agent may use a scan code reader to read an identification code on the packed and sealed order and control system 300 may receive the identification code and information indicating the completion of order processing for that order); and
a second MV computing device comprising a second camera (col. 3-4, lines 61-67, 1-9, Different data captured devices may be used to capture data in different stages of item or order processing. For example, cameras may be used to capture images of the sorting, packing and shipping of an item, in one embodiment) configured to:
scan for a third trigger of label recognition associated with the first item, the third trigger being associated with a transport request (col. 7, lines 46-65, Detecting an RFID may trigger the capturing of visual verification data characteristic of the particular stage of order processing. For example, an item may be detected, (e.g. by detecting an RFID, reading a scan code, or visually by processing personnel) upon arrival at a particular stage of order processing, and one or more types of data (e.g. images, audio, environmental, timing, etc) may be capture); and
capture a second video comprising a process of finalizing the transport request (col. 3-4, lines 61-67, 1-9, Different data captured devices may be used to capture data in different stages of item or order processing. For example, cameras may be used to capture images of the sorting, packing and shipping of an item, in one embodiment); and
a dispatch module comprising:
one or more processors; and a memory in communication with the one or more processors and storing instructions that, when executed by the one or more processors (col. 25, lines 25-27, In the illustrated embodiment, computer system 1300 may include one or more processors 1310 coupled to a system memory 1320), are configured to cause the dispatch module to:
receive a dispatch request for the first item (col. 13, lines 19-34, As illustrated by block 400, one or more items for an order may arrive at a packing station);
receive, from the first MV computing device, the first trigger of object recognition (col. 13, lines 19-34, After the items for an order have arrived at the packing station, a control system, such as control system 300, may start capturing images of the packaging of the one or more items in a package, as illustrated by block 410);
initiate, in response to receiving the first trigger of object recognition, a first video processing session to save the first video (col. 13, lines 19-34, After the items for an order have arrived at the packing station, a control system, such as control system 300, may start capturing images of the packaging of the one or more items in a package, as illustrated by block 410);
receive, from the first MV computing device, the second trigger of label recognition (col. 14, lines 46-64, In yet other embodiments, packing personnel may manually signal the completion of order processing for an order via one or more manual switches, such as buttons, levers, foot pedals, scan code readers, etc. For example, after processing an order, a processing agent may use a scan code reader to read an identification code on the packed and sealed order and control system 300 may receive the identification code and information indicating the completion of order processing for that order);
discontinue, in response to receiving the second trigger, the first video processing session (col. 14, lines 30-45, Detecting a package leaving the processing station, as illustrated by block 420, may signal the stopping of image capturing for the package and order, as illustrated by block 430);
receive, from the second MV computing device and subsequent to discontinuing the first video processing session, the third trigger of label recognition (col. 7, lines 46-65, Detecting an RFID may trigger the capturing of visual verification data characteristic of the particular stage of order processing. For example, an item may be detected, (e.g. by detecting an RFID, reading a scan code, or visually by processing personnel) upon arrival at a particular stage of order processing, and one or more types of data (e.g. images, audio, environmental, timing, etc) may be capture); and
initiate, in response to receiving the third trigger of label recognition, a second video processing session to save the second video (col. 3-4, lines 61-67, 1-9, Different data captured devices may be used to capture data in different stages of item or order processing. For example, cameras may be used to capture images of the sorting, packing and shipping of an item, in one embodiment).
While Shakes discloses initiating a first video processing session, Shakes does not disclose wherein the first trigger of object recognition for the first item is an identification, by the MV computing device, of an object resembling the first item.
Sinha discloses a first machine vision (MV) computing device comprising a first camera configured to:
monitor a plurality of items including a first item and one or more auxiliary items, wherein one or more of the auxiliary items pass in view of the MV computing device before the first item is in view of the MV computing device (paragraph 69, At block 902, the REMT application 132 receives video frames (e.g., an arbitrary but consecutive segment of the full set of frames received at stage 602));
scan for a first trigger of object recognition of the first item from amongst the plurality of items, wherein the first trigger of object recognition for the first item is an identification, by the MV computing device, of an object resembling the first item (paragraph 69, at block 904, the image processing unit 136 (using a CNN 140) performs object recognition to detect events depicted by the frames. If the image processing unit 136 detects an event in a suspect class at block 904, flow proceeds to block 906); and
a memory in communication with the one or more processors and storing instructions that, when executed by the one or more processors, are configured to cause the dispatch module to:
receive, from the first MV computing device, the first trigger of object recognition (paragraph 69, at block 904, the image processing unit 136 (using a CNN 140) performs object recognition to detect events depicted by the frames. If the image processing unit 136 detects an event in a suspect class at block 904, flow proceeds to block 906);
initiate, in response to receiving the first trigger of object recognition, a first video processing session to save the first video (paragraph 71, at block 908 the REMT application 132 causes a corresponding event to be stored in the event database 144).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to initiate a process to save a first video when an object is recognized. The motivation for doing so would have been to automate the process of saving important video in order to reduce the operations required by a human. Therefore, it would have been obvious to combine Sinha with Shakes to obtain the invention as specified in claim 18.
Referring to claim 19, Shakes discloses wherein the instructions are further configured to cause the dispatch module to:
receive, from a transport system (col. 13, lines 19-34, In other embodiments, the method illustrated in FIG. 4A may also be performed at any of a number of processing stations in a materials handling facility, such as a sorting station, packing station, quality assurance station, and/or shipping station, among others), an indication of receipt of the first item (col. 14, lines 46-64, In yet other embodiments, packing personnel may manually signal the completion of order processing for an order via one or more manual switches, such as buttons, levers, foot pedals, scan code readers, etc. For example, after processing an order, a processing agent may use a scan code reader to read an identification code on the packed and sealed order and control system 300 may receive the identification code and information indicating the completion of order processing for that order);
discontinue, in response to receiving the indication of receipt of the first item, the second video processing session (col. 14, lines 30-45, Detecting a package leaving the processing station, as illustrated by block 420, may signal the stopping of image capturing for the package and order, as illustrated by block 430) (col. 19-20, lines 60-67, 1-13, FIG. 8C illustrates one embodiment of a webpage displaying captured images of an order).
Claims 3-5, 10, 15, 16 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Shakes et al. US Patent 7,769,221 and Sinha et al. US Publication 2023/0051146 as applied to claims 2 and 19 above, and further in view of well known prior art.
Referring to claims 3 and 20, Shakes discloses wherein the instructions are further configured to cause the dispatch module to combine the first video and the second video to maintain in a database (col. 5, lines 5-23, the customer may receive (or be given access to) one or more video clips including short segments showing the customer's order being processed).
While Shakes discloses combining the first video and the second video, Shakes does not disclose expressly combining the first video and the second video into a single video file.
Official Notice is taken that it is well known and obvious in the art to combine multiple videos into a single video (See MPEP 2144.03). The motivation for doing so would have been to reduce the number of files being sent so that a user can more easily open and view the entirety of the videos. Therefore, it would have been obvious to combine well known prior art with Shakes to obtain the invention as specified in claims 3 and 20.
Referring to claim 4, Shakes discloses wherein the instructions are further configured to cause the dispatch module to:
tag the single video file with an order identifier associated with the dispatch request (col. 5, lines 5-23, the customer may receive (or be given access to) one or more video clips including short segments showing the customer's order being processed);
receive, from a third-party system, a chargeback request (col. 23, lines 33-44, For example, a customer may complain, as illustrated by block 1200, that a particular item was not included in the order); and
transmit, to the third-party system, the single video file tagged with the order identifier in response to receiving the chargeback request (col. 23-24, lines 45-67, 1-10, For instance, a customer service representative may retrieve one or more images associated with the order, as illustrated by block 1220, and review the images to determine the validity of the customer's complaint, as illustrated by block 1240).
Referring to claim 5, Shakes discloses wherein the MV computing device is a camera, the camera being configured to (i) scan for the first trigger of object recognition, (ii) scan for the second trigger of label recognition, and (iii) capture the first video and the second video (col. 3-4, lines 61-67, 1-9, Different data captured devices may be used to capture data in different stages of item or order processing. For example, cameras may be used to capture images of the sorting, packing and shipping of an item, in one embodiment).
Shakes does not disclose expressly wherein the MV computing device is an MV headset with an integrated camera.
Official Notice is taken that it is well known and obvious in the art to obtain images with an MV headset with an integrated camera (See MPEP 2144.03). The motivation for doing so would have been to make the act of obtaining images easier for a user and to obtain high quality close up images. Therefore, it would have been obvious to combine well known prior art with Shakes to obtain the invention as specified in claim 5.
Referring to claim 10, Shakes discloses the second trigger of label recognition and the third trigger of label recognition, but does not disclose expressly wherein the second trigger of label recognition and the third trigger of label recognition are separated temporally by more than one hour.
Official Notice is taken that it is well known and obvious in the art for a shipping process to occur an hour after a packaging process (See MPEP 2144.03). The motivation for doing so would have been to efficiently process packages in a manner that is not overly expensive. Therefore, it would have been obvious to combine well known prior art with Shakes to obtain the invention as specified in claim 10.
Referring to claim 15, Shakes discloses wherein the instructions are further configured to cause the dispatch module to combine the first video and the second video to maintain in a database (col. 5, lines 5-23, the customer may receive (or be given access to) one or more video clips including short segments showing the customer's order being processed).
While Shakes discloses combining the first video and the second video, Shakes does not disclose expressly combining the first video and the second video into a single video file.
Official Notice is taken that it is well known and obvious in the art to combine multiple videos into a single video (See MPEP 2144.03). The motivation for doing so would have been to reduce the number of files being sent so that a user can more easily open and view the entirety of the videos. Therefore, it would have been obvious to combine well known prior art with Shakes to obtain the invention as specified in claim 15.
Referring to claim 16, Shakes discloses wherein the instructions are further configured to cause the dispatch module to:
tag the single video file with an order identifier associated with the dispatch request;
receive, from a third-party system, a chargeback request (col. 23, lines 33-44, For example, a customer may complain, as illustrated by block 1200, that a particular item was not included in the order); and
transmit, to the third-party system, the single video file tagged with the order identifier in response to receiving the chargeback request (col. 23-24, lines 45-67, 1-10, For instance, a customer service representative may retrieve one or more images associated with the order, as illustrated by block 1220, and review the images to determine the validity of the customer's complaint, as illustrated by block 1240).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PETER K HUNTSINGER whose telephone number is (571)272-7435. The examiner can normally be reached Monday - Friday 8:30 - 5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benny Q Tieu can be reached at 571-272-7490. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PETER K HUNTSINGER/ Primary Examiner, Art Unit 2682