DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 16-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
In the instant case, Claims 16-19 are directed to a method and Claim 20 is directed to a non-transitory computer-readable storage medium. Therefore, these claims fall within the four statutory categories of invention.
The claims recite the abstract idea of “processing the information and the further information to obtain a result about the first checkout station”. Specifically, the claims recite “acquiring information about a first checkout station with a first edge camera associated with the first checkout station” (hereinafter the first acquiring step), “acquiring further information about the first checkout station with a second edge camera associated with a second checkout station” (hereinafter the second acquiring step), “communicating the further information from the second edge camera to the first edge camera” (hereinafter the communicating step), and “processing the information and the further information to obtain a result about the first checkout station” (hereinafter the processing step), which fall within the “methods of organizing human activity” grouping of abstract ideas under prong one of step 2A of the Alice/Mayo test (see 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50, 52, 54 (January 7, 2019)) because observing people at self-checkout stations would be considered marketing or sales activity. See MPEP 2106.04(a)(2)(II)(B). The acquiring and communicating steps represent insignificant extra-solution activity similar to the mere data gathering example of obtaining information over a network. See MPEP 2106.05(g). The camera limitations are recited at a high level of generality and do not appear to be particular machines. See MPEP 2106.05(b). Additionally, the first and second cameras connected to a network as mentioned in Claim 18 do not appear to be improvements to cameras or networks. See MPEP 2106.05(a). Accordingly, the claims recite an abstract idea (see pages 7 and 10 of Alice Corporation Pty. Ltd. v. CLS Bank International, et al., US Supreme Court, No. 13-298, June 19, 2014; 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50, 53-54 (January 7, 2019)).
This judicial exception is not integrated into a practical application. When analyzed under prong two of step 2A of the Alice/Mayo test (see 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50, 54-55 (January 7, 2019)), the additional elements of the claims, namely the first and second edge cameras associated with the first and second checkout stations in Claims 16 and 20, the vision mesh network of Claim 18, and the non-transitory computer-readable medium and one or more processors recited in Claim 20, are recited at a high level of generality: the network is merely a mode of communication, and the medium and processors are a generic computer. These additional elements merely use a computer as a tool to perform the abstract idea and/or generally link the use of the judicial exception to a particular technological environment. Specifically, they perform the steps or functions of the first and second acquiring steps, the communicating step and the processing step. The use of a processor/computer as a tool to implement the abstract idea, and/or generally linking the use of the abstract idea to a particular technological environment, does not integrate the abstract idea into a practical application because it requires no more than a computer performing functions that correspond to acts required to carry out the abstract idea. The additional elements do not involve improvements to the functioning of a computer or to any other technology or technical field (MPEP 2106.05(a)); the claims do not apply or use the abstract idea to effect a particular treatment or prophylaxis for a disease or medical condition (Vanda Memo); the claims do not apply the abstract idea with, or by use of, a particular machine (MPEP 2106.05(b)); the claims do not effect a transformation or reduction of a particular article to a different state or thing (MPEP 2106.05(c)); and the claims do not apply or use the abstract idea in some other meaningful way beyond generally linking it to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception (MPEP 2106.05(e) and Vanda Memo). Therefore, the claims do not, for example, purport to improve the functioning of a computer, nor do they effect an improvement in any other technology or technical field. Accordingly, the additional elements do not impose any meaningful limits on practicing the abstract idea, and the claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. When analyzed under step 2B of the Alice/Mayo test (see 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50, 52, 56 (January 7, 2019)), using the first and second edge cameras associated with the first and second checkout stations in Claims 16 and 20, the vision mesh network of Claim 18, and the non-transitory computer-readable medium and one or more processors recited in Claim 20 to perform the steps amounts to no more than using a computer or processor to automate and/or implement the abstract idea of “processing the information and the further information to obtain a result about the first checkout station”. As discussed above, taking the claim elements separately, these additional elements perform the steps or functions of the first and second acquiring steps, the communicating step and the processing step, which correspond to the actions required to perform the abstract idea. Viewed as a whole, the combination of elements recited in the claims merely recites the concept of “processing the information and the further information to obtain a result about the first checkout station”. Therefore, the use of these additional elements does no more than employ the computer as a tool to automate and/or implement the abstract idea, which cannot provide significantly more than the abstract idea itself (MPEP 2106.05(f) and (h)). Therefore, the claims are not patent eligible.
Dependent claims 17-19 further describe the abstract idea of “processing the information and the further information to obtain a result about the first checkout station”. The dependent claims do not include additional elements that integrate the abstract idea into a practical application or that provide significantly more than the abstract idea. Therefore, the dependent claims are also not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Whitelaw et al. (US 2025/0046161 A1) in view of Carranza et al. (US 2019/0043207 A1), and further in view of Landers, Jr., et al. (US 2020/0402130 A1).
Regarding Claim 1, Whitelaw teaches a point-of-sale system comprising:
a first checkout station, i.e., checkout terminal (100), as illustrated at figures 1 and 8, at a first location, noting the mention of “a central server that exchanges data with a plurality of checkout terminals” as mentioned at paragraph 69, last sentence;
a plurality of first edge cameras, i.e., cameras (1110, 1120, 1130, 1140, 1250, 1260, 1270, 1280, 1290, 1210), as mentioned at paragraphs 96 and 97 and as illustrated in figures 11 and 12, associated with the first checkout station (100), each of the plurality of first edge cameras having a first primary viewing area, i.e., fields of view (1111, 1121, 1131, 1141, 1251, 1261), within the first checkout station (100), and a first peripheral viewing area outside the first checkout station;
a second checkout station (100), at the first location, as mentioned at paragraph 69, last sentence, which states “[i]n some embodiments, the steps of FIG. 8 may be at least partially executed by a central server that exchanges data with a plurality of checkout terminals”;
a plurality of second edge cameras (1110, 1120, 1130, 1140, 1250, 1260, 1270, 1280, 1290, 1210) associated with the second checkout station (100), as mentioned at paragraphs 96 and 97, each of the plurality of second edge cameras having a second primary viewing area, i.e., fields of view (1111, 1121, 1131, 1141, 1251, 1261), within the second checkout station (100) and a second peripheral viewing area outside the second checkout station,
the second peripheral viewing area of at least one of the second edge cameras being within the first checkout station;
a vision mesh network having a plurality of nodes in communication with each other, at least one of the plurality of first edge cameras and at least one of the plurality of second edge cameras being nodes within the plurality of nodes on the vision mesh network and in communication with each other; and
one of the plurality of first edge cameras receiving and processing information about the first checkout station from the at least one second camera.
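For orientation, the claimed arrangement mapped above can be restated as a small data model. The following is a minimal sketch of the claim 1 topology only; every identifier is hypothetical and does not correspond to any reference numeral in the cited art.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EdgeCamera:
    camera_id: str
    primary_station: str     # station whose interior is the primary viewing area
    peripheral_station: str  # neighboring station covered by the peripheral viewing area

@dataclass
class CheckoutStation:
    station_id: str
    cameras: List[EdgeCamera] = field(default_factory=list)

# Each station's edge cameras view their own station primarily and a
# neighboring station peripherally, so the first checkout station also
# falls within at least one second-station camera's peripheral view.
station_1 = CheckoutStation("station_1")
station_2 = CheckoutStation("station_2")
station_1.cameras.append(EdgeCamera("cam_1a", "station_1", "station_2"))
station_2.cameras.append(EdgeCamera("cam_2a", "station_2", "station_1"))

# Cameras observing station_1: its own plus any neighbor camera whose
# peripheral viewing area covers station_1.
observers = [c for s in (station_1, station_2) for c in s.cameras
             if "station_1" in (c.primary_station, c.peripheral_station)]
assert {c.camera_id for c in observers} == {"cam_1a", "cam_2a"}
```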
Regarding Claim 1, Whitelaw does not expressly teach
a first peripheral viewing area outside the first checkout station;
a second peripheral viewing area outside the second checkout station,
the second peripheral viewing area of at least one of the second edge cameras being within the first checkout station;
a vision mesh network having a plurality of nodes in communication with each other, at least one of the plurality of first edge cameras and at least one of the plurality of second edge cameras being nodes within the plurality of nodes on the vision mesh network and in communication with each other; and
one of the plurality of first edge cameras receiving and processing information about the first checkout station from the at least one second camera.
Regarding Claim 1, Whitelaw does not expressly teach, but Landers ‘130 teaches
a first peripheral viewing area, as seen in figure 2, which shows several viewing areas throughout the store (200) peripheral to/outside the first checkout station (205), as mentioned at paragraphs 21-23 and 25-26, for example;
a second peripheral viewing area, likewise seen in figure 2, peripheral to/outside the second checkout station (205), as mentioned at paragraphs 21-23 and 25-26, for example,
the second peripheral viewing area of at least one of the second edge cameras, i.e., any of cameras (202, 425), being within the first checkout station (120, 205, 300, 400). See also paragraphs 28, 30, 31, 37 and 40, along with paragraphs 21-23 and 25-26, for example, which state as follows.
[0021] FIG. 2 illustrates a portion of an exemplary store 200 depicting shelves, POS terminals and an exit to the store, according to aspects of the present disclosure. Store 200 includes shelving units 203 with shelves 210 and items 215 that are available for selection, purchase, etc. Multiple shelving units 203 may be arranged in the store 200 to form aisles through which customers may navigate.
[0022] The store 200 includes a plurality of sensor modules 202 disposed in the ceiling 201. A POS system of the store 200 may use information gathered by sensor modules in determining items being purchased by a customer. For example, a POS system may receive imagery of a customer placing a box of corn flakes in the customer's basket and store a record that the customer picked up the box of corn flakes for use (e.g., as a reference) when the customer is checking out. Each sensor module 202 may include one or more types of sensors, such as visual sensors (e.g., cameras), audio sensors (e.g., microphones), and motion sensors. Sensor modules 202 may also include actuating devices for orienting the sensors. Sensor modules or individual sensors may generally be disposed at any suitable location within the store 200. Some non-limiting examples of alternative locations include below, within, or above the floor 230, within other structural components of the store 200 such as a shelving unit 203 or walls. In some embodiments, sensors may be disposed on, within, or near product display areas such as shelving unit 203. The sensors may also be oriented toward an expected location of a customer interaction with items, to provide data about the interaction, such as determining the customer's actions.
[0023] Store 200 also includes a number of POS terminals (e.g., kiosks) 205. Each POS terminal 205 may include computing devices or portions of computing systems, and may include various I/O devices, such as visual displays, audio speakers, cameras, microphones, key pads, and touchscreens for interacting with the customer. According to aspects of the disclosure, a POS terminal 205 may identify items a customer is purchasing, for example, by determining the items from images of the items.
[0025] In some embodiments, the shelving unit 203 may include attached and/or embedded visual sensors or other sensor devices or I/O devices. The sensors or devices may communicate with networked computing devices within the store 200. A POS system may use information gathered by sensors on a shelving unit to determine items being purchased by a customer. For example, the front portions 220 of shelves 210 may include video sensors oriented outward from the shelving unit 203 to capture customer interactions with items 215 on the shelving unit 203, and the data from the video sensors may be provided to a POS system for use in determining items in the customer's basket when the customer is checking out.
[0026] A POS system of the store 200 may utilize sensor modules 202 to build a transaction for customer 240. The POS system may recognize various items 215 picked up and placed in a bag or basket by the customer 240. The POS system may also recognize the customer 240, for example, by recognizing the customer's face or a mobile computing device 245 carried by the customer 240. The POS may associate each item 215 picked up by the customer 240 with the customer 240 to build a transaction for the customer 240.
[0027] FIG. 3 illustrates an exemplary POS terminal 300, according to one embodiment of the present disclosure. POS terminal 300 is generally similar in structure and function to POS terminal 205. POS terminal 300 includes a base portion 312, one or more vertical portions 311, 313, a support member 314 for supporting a shopping basket 343, and a credit card reader 322. POS terminal 300 includes a camera 320 oriented for identifying store items in a shopping basket 343. POS terminal 300 may include a touchscreen or display 318 and camera 317 that are generally oriented toward customers using the POS terminal, e.g., customer 350. Vertical portion 311 may also include a plurality of indicator lights 315.
[0028] In some embodiments of the present disclosure, the camera 320 may be oriented such that it can view both items in the shopping basket 343 and a customer 350 using the POS terminal 300. The camera 320 may be oriented to view items in the shopping basket 343 and a customer using the POS terminal 300 by placing the camera 320 high on the POS terminal 300, using a motor to move the camera 320 to change the viewpoint of the camera 320, or supplying the camera 320 with a wide-angle lens.
[0029] The support member 314 may have markings indicating where the basket 343 should be positioned during operation of the POS terminal 300. Similarly, the display 318 may present messages to assist a customer in positioning the basket 343. Indicator lights 315 may also be used to indicate proper or improper basket positioning on support member 314. The support member 314 may also include a scale for determining the weight of the basket 343. The weight of the basket may be used in identifying items within the basket 343.
[0030] According to aspects of the present disclosure, a POS system may simultaneously identify items 362, 364 in the basket 343 and the customer 350. The POS system may identify the items 362, 364 based on one or more images of the items captured by the cameras 317, 320. The POS system may identify the items based on barcodes, quick response (QR) codes, reflected light, colors, sizes, weight ranges, packaging dimensions, packaging shapes, and graphical design of the items. For example, a POS system may capture an image of basket 343 using camera 320. The POS system may determine that item 364 is a bag of chips based on a bar-code in the image, and item 362 is an apple, based on its color in the image and its weight, which is determined by the scale in support member 314.
[0031] The POS system may identify the customer 350 based on an image of the customer captured by cameras 317 and/or 320 (e.g., by use of facial recognition software), based on a mobile computing device carried by the customer (e.g., based on an app running on the customer's smartphone), based on the customer's voice (e.g., using voice recognition software), and/or based on a gesture made by the customer (e.g., captured using a touch-capable implementation of the display 318 at the POS terminal 300). For example, a POS system may receive an image of customer 350 from camera 317 on POS terminal 300, and the POS system may use facial recognition software with the image to determine that customer 350 is Susan Jones.
[0037] FIG. 4 illustrates an exemplary checkout area 400, according to one embodiment of the present disclosure. Checkout area 400 may be associated with or be a part of store 100 or store 200. Checkout area 400 includes two exemplary checkout lanes 405a and 405b, but other numbers of checkout lanes are included in the scope of the disclosure.
[0038] Each checkout lane may include a plurality of dividers 410L, 410R that bound each checkout lane. While the checkout lanes are shown with dividers, the dividers are optional, and checkout lanes may be bounded by markings on the floor or other means. As shown, the dividers 410L, 410R are attached to framing in the ceiling, but alternative embodiments may have one or more dividers attached to the floor or free-standing.
[0039] One or more of the dividers 410L, 410R may include input/output devices for customer interaction, such as a display 415. Other input/output devices such as audio speakers, a touchscreen, a keypad, a microphone, etc. may also be included.
[0040] The dividers may include cameras 420L, 420R for capturing images of items 430 included in shopping cart 440. The cameras 420L, 420R may be oriented toward an expected position of the shopping cart 440, such as relative to a segment of lane lines 412. The images may be analyzed based on properties of the items 430, as well as labeling such as barcodes 435. A separate camera 425 may be included in a checkout lane 405 for capturing additional images of the items 430 and/or images of the customer 401. A POS system may analyze the images of the items 430 to determine the items being purchased by the customer 401. For example, a POS system may determine the items being purchased by scanning one or more images and reading a bar-code on an item, reading a quick reference (QR) code from an item, reading a label from an item, and/or looking up the color, size, or shape of the item in a database of store items. A POS system may also combine the previously mentioned techniques and use incomplete information from a technique, either alone or in combination with another technique. For example, a POS system may read a partial bar-code and determine a color of an item from one or more images, then determine the item by looking up the partial bar-code in a database and determining a group of items matching the partial bar-code, and then using the determined color to select one item from the group. In a second example, a POS system may read a partial bar-code from an item, look up the partial bar-code in a store inventory database, and determine that only [one] item in the inventory database matches the partial bar-code. A POS system may also use information provided from other types of sensors to determine items being purchased. For example, a POS system may include radio-frequency identification (RFID) scanners and determine items being purchased by scanning RFID chips included in the items.
Emphasis provided.
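To make the cited disambiguation technique concrete, the following is a minimal sketch of the approach described in Landers [0040], i.e., looking up a partial barcode in an inventory database and, where several items match, using a second attribute such as detected color to select one item from the group. All names and data here are hypothetical illustrations, not drawn from the reference.

```python
# Hypothetical inventory; barcodes and attributes are illustrative only.
INVENTORY = {
    "0123456789012": {"name": "corn flakes", "color": "yellow"},
    "0123456789013": {"name": "bran flakes", "color": "brown"},
}

def identify_item(partial_barcode: str, detected_color: str):
    # Collect every inventory item whose full barcode contains the fragment.
    matches = [item for code, item in INVENTORY.items() if partial_barcode in code]
    if len(matches) == 1:
        return matches[0]  # the partial code was already unique
    # Otherwise narrow the candidate group using the detected color.
    narrowed = [item for item in matches if item["color"] == detected_color]
    return narrowed[0] if len(narrowed) == 1 else None

print(identify_item("012345678901", "yellow"))  # -> the corn flakes entry
```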
Regarding Claim 1, before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to have provided a first peripheral viewing area outside the first checkout station and a second peripheral viewing area outside the second checkout station, the second peripheral viewing area of at least one of the second edge cameras being within the first checkout station,
as taught by Landers ’130, in Whitelaw’s checkout terminal system, for the purpose of increasing security by adding various cameras in peripheral areas of the store where the point-of-sale checkout stations are located.
Regarding Claim 1, Whitelaw does not expressly teach, but Carranza teaches
a vision mesh network (856, 920), as illustrated in figures 8 and 9 and as mentioned at paragraphs 91, 95 and 102, for example, having a plurality of nodes, i.e., smart cameras (220a, 220b, 220c) and surveillance orchestration device (210), as illustrated in figure 2 and as mentioned in paragraphs 49 and 50, cameras (C1, C2, C3, C4), as mentioned at paragraphs 58 and 59, and internet-of-things (IoT) devices (804), as illustrated in figure 8 and as mentioned at paragraphs 90-93, 96 and 97, in communication with each other, i.e., via mesh transceiver (1162), noting that each smart camera (220a, 220b, 220c) is considered an IoT processing device (1150) as illustrated in figure 11; at least one of the plurality of first edge cameras (220a, 220b, 220c) and at least one of the plurality of second edge cameras (220a, 220b, 220c) being nodes, noting the transceiver (1162) in each camera device, within the plurality of nodes on the vision mesh network (856), as mentioned in the first sentence of paragraphs 91, 95 and 99, as well as illustrated in figures 8 and 9, and in communication with each other, as mentioned at paragraph 90, second sentence, i.e., “a number of IoT devices 804 may communicate with a gateway 854, and with each other through the gateway 854”, and paragraph 102, first sentence, i.e., “[c]ommunications from any IoT device 902 may be passed along a convenient path (e.g., a most convenient path) between any of the IoT devices 902 to reach the gateways 904”; and
one of the plurality of first edge cameras (220a-220c, 902, 1150) receiving and processing information, i.e., via processor (1152) as illustrated in figure 11, about the first checkout station, as taught by Whitelaw, from the at least one second camera (220a-220c, 902, 1150).
Regarding Claim 1, before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to have provided a vision mesh network having a plurality of nodes in communication with each other, at least one of the plurality of first edge cameras and at least one of the plurality of second edge cameras being nodes, and
one of the plurality of first edge cameras receiving and processing information about the first checkout station from the at least one second camera,
as taught by Carranza, in Whitelaw’s checkout terminal system, for the purpose of increasing security by adding various cameras in peripheral areas of the store where the point-of-sale checkout stations are located, the cameras being connected with each other and passing video information between them for processing at particular cameras.
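To illustrate the cited mesh behavior, the following is a minimal sketch in which each edge camera is a node that relays data hop by hop along a convenient path, so that a designated first-station camera receives and processes information originating at a second-station camera. The topology and identifiers are hypothetical and are not taken from Carranza.

```python
from collections import deque

# Hypothetical adjacency map: which camera nodes can reach each other directly.
MESH = {
    "cam_1a": {"cam_1b", "cam_2a"},
    "cam_1b": {"cam_1a"},
    "cam_2a": {"cam_1a", "cam_2b"},
    "cam_2b": {"cam_2a"},
}

def route(source: str, target: str):
    """Breadth-first search for a shortest relay path through the mesh."""
    queue, seen = deque([[source]]), {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for neighbor in MESH[path[-1]] - seen:
            seen.add(neighbor)
            queue.append(path + [neighbor])
    return None  # no path; the nodes are not connected

# A second-station camera forwards its view of the first checkout station to
# a first-station camera, which performs the processing step.
print(route("cam_2b", "cam_1a"))  # -> ['cam_2b', 'cam_2a', 'cam_1a']
```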
Regarding Claim 2, Whitelaw does not expressly teach wherein the first peripheral viewing area of at least one of the first cameras is located within the second checkout station.
Regarding Claim 2, Whitelaw does not expressly teach, but Landers ‘130 teaches wherein the first peripheral viewing area of at least one of the first cameras (202) is located within the second checkout station (205, 300, 400).
Regarding Claim 3, Whitelaw does not expressly teach wherein the one first camera receives information from the plurality of first cameras and the at least one second camera.
Regarding Claim 3, Whitelaw does not expressly teach, but Carranza teaches, wherein the one first camera (220a-220c, 902, 1150) receives information from the plurality of first cameras (220a-220c, 902, 1150) and the at least one second camera (220a-220c, 902, 1150), as mentioned at paragraphs 49-50, 57-59, 90, 114, 115, 117, 119 and 123, for example.
Regarding Claim 4, Whitelaw teaches wherein the information includes images, i.e., images from cameras (121-124), as mentioned in paragraph 28, and noting the cameras of paragraphs 96 and 97, for example.
Regarding Claim 5, see the rejection of Claims 1-4, above.
Regarding Claim 6, see the rejection of Claim 4, above.
Regarding Claim 7, Whitelaw does not expressly teach wherein each of the plurality of first edge cameras and each of the plurality of second edge cameras are nodes within the plurality of nodes on the vision mesh network and in communication with each other.
Regarding Claim 7, Whitelaw does not expressly teach, but Carranza teaches wherein each of the plurality of first edge cameras (220a-220c, 902, 1150) and each of the plurality of second edge cameras (220a-220c, 902, 1150) are nodes within the plurality of nodes on the vision mesh network and in communication with each other, as mentioned in the first sentence of paragraphs 91, 95 and 99, as illustrated in figures 8 and 9, and as mentioned at paragraph 90, second sentence, i.e., “a number of IoT devices 804 may communicate with a gateway 854, and with each other through the gateway 854”, and paragraph 102, first sentence, i.e., “[c]ommunications from any IoT device 902 may be passed along a convenient path (e.g., a most convenient path) between any of the IoT devices 902 to reach the gateways 904”.
Note that it has been held that mere duplication of the essential working parts of a device involves only routine skill in the art. See St. Regis Paper Co. v. Bemis Co., 193 USPQ 8.
Regarding Claim 8, see the rejection of Claim 1, above, noting that it is considered a matter of design choice as to how many checkout counters and sets of cameras to provide, based upon the amount of customer throughput desired to be processed.
Regarding Claim 9, Whitelaw teaches further comprising two to four more checkout stations (100) at the first location, as mentioned at paragraph 69, last sentence, which states “[i]n some embodiments, the steps of FIG. 8 may be at least partially executed by a central server that exchanges data with a plurality of checkout terminals”.
Regarding Claim 10, see the rejection of Claim 7, above.
Regarding Claim 11, Whitelaw teaches wherein the first primary viewing area includes a target, noting that each camera (121-124, 153, 154, 1110, 1120, 1130, 1140, 1250, 1260, 1270, 1280, 1290) has a field of view (1111, 1121, 1131, 1141, 1251, 1261, 1271, 1281, 1291), as mentioned at paragraphs 96 and 97, for example.
Regarding Claim 12, Whitelaw teaches wherein the target includes a scanner platter (120), as mentioned in paragraph 28, a scale (120), a scanner (134), as mentioned in paragraph 29, for example, a shopping cart, a handbasket, a bagging area or a payment area, i.e., card reader (132), as mentioned at paragraph 29 and as illustrated in figures 11-17, for example.
Regarding Claim 13, see the rejection of Claim 1.
Regarding Claim 14, see the rejection of Claim 1.
Regarding Claim 15, see the rejection of Claims 1-3.
Regarding Claim 16, see the rejection of Claim 1.
Regarding Claim 17, see the rejection of Claim 4.
Regarding Claim 18, see the rejection of Claim 1.
Regarding Claim 19, see the rejection of Claim 1.
Regarding Claim 20, see the rejection of Claim 1.
Conclusion
Applicant is encouraged to contact the Examiner with any questions about this rejection, or to explore potential amendments or potentially allowable subject matter.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Bobbitt ‘094 is cited as teaching smart cameras (104-1, 104-3, 104-4) that include both a video feed and computing power, along with a camera (104-2) providing a pure video feed, a computing device (104-n) and other devices shown in figure 3, which illustrates mesh network (106).
Clavenna ‘077 is cited as teaching a host camera pair, i.e., primary camera system (210) and remote video camera system (220), as illustrated in figure 2.
Sanil ‘625 is cited as teaching POS system (118) connected to camera (102) with processing system (106) that produces images (122), as illustrated in figure 1, for example. See also figure 16, with cameras (1602, 1604, 1606) as well as other cameras surrounding the checkout POS. Note also the targets (410, 402, 504, 604, 702, 802, 904, 1104), as mentioned at figures 4-12, for example.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEFFREY ALAN SHAPIRO whose telephone number is (571)272-6943. The examiner can normally be reached Monday-Friday generally between 8:30AM and 6:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anita Y Coupe can be reached at 571-270-3614. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JEFFREY A SHAPIRO/Primary Examiner, Art Unit 3619
January 4, 2026