Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1, 11, and 18 are currently amended. Claims 2, 4, 6, 8, 12, 14, 16-17, and 19 are cancelled. No claims have been added. Claims 1, 3, 5, 7, 9-11, 13, 15, 18, and 20 are currently under examination.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 21, 2026, has been entered.
Response to Arguments
Applicant’s arguments with respect to claims 1, 3, 5, 7, 9-11, 13, 15, 18, and 20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 11, 13, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Komaki et al. (Pub. No.: US 2022/0019293 A1), hereinafter referred to as Komaki, in view of Gladkov et al. (Pub. No.: US 2021/0090315 A1), hereinafter referred to as Gladkov, in view of Jennings (Pub. No.: US 2022/0365897 A1), in view of Petrov et al. (Pub. No.: US 2023/0252736 A1), hereinafter referred to as Petrov, in view of Yamazaki et al. (Patent No.: US 11,106,353 B1), hereinafter referred to as Yamazaki, and in view of Grossman et al. (Pub. No.: US 2019/0221185 A1), hereinafter referred to as Grossman.
With respect to Claim 1, Komaki teaches eyewear (figs. 1 & 3, item 100; ¶29; ¶102), comprising: a frame (figs. 1 & 3, item 101; ¶29) having a first side (fig. 3, item 107; ¶29) and a second side (fig. 3, item 106; ¶29) and configured to be worn by a user having a head and hands (¶43); a first processor (fig. 3, item 11; ¶68) adjacent the first side of the frame, the first processor coupled to a camera (figs. 1 and 7, coupled to item 13R (camera) via electrical connections; ¶17; ¶102); a second processor (fig. 3, item 11*; ¶68) adjacent the second side, the second processor coupled to the first processor and to a display (figs. 3 and 7, item 12L; ¶35; coupled via electrical connections; the display corresponds to display 12L; ¶65, “data processors 11 and 11* having the same functions”; ¶68); and a bus (¶102, a system bus of the data processor 11 is connected to a display and a camera 13).
Komaki does not teach the first processor as a first system on a chip (SoC) and the second processor as a second SoC nor does Komaki teach an interface extending between the first and second SoCs.
Gladkov teaches eyewear (fig. 2B, item 112; ¶63), comprising: a frame (¶64, “front frame”); a first system on chip (SoC) (fig. 6, item 630A; ¶108-109); a second SoC (fig. 6, item 630B; ¶109); and an interface (fig. 6, item 684; ¶108) extending between the first and second SoCs.
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the eyewear of Komaki, such that the first processor is a first system on a chip (SoC), the second processor is a second SoC, and an interface extends between the first and second SoCs, as taught by Gladkov so as to constrain rendering complexity at the application level in response to resource availability (¶10).
Komaki and Gladkov combined do not teach an interface comprising a mobile industry processor interface (MIPI), the interface configured to convert display serial interface (DSI) images of the first SoC to camera serial interface (CSI) images, wherein the interface is configured to unidirectionally send the CSI images from the first SoC to the second SoC for bulk image transmission.
Jennings teaches an interface comprising a mobile industry processor interface (MIPI) (¶28), the interface (fig. 2, item 212; ¶35; fig. 3D, item 382; ¶49) configured to convert display serial interface (DSI) images of a first processor/SoC (fig. 2, item 210; ¶50, “compress or decompress data between an SoC (or processor) and display”) to camera serial interface (CSI) images (¶35; ¶49), wherein the interface is configured to unidirectionally send the CSI images from the first processor/SoC to the second processor for bulk image transmission (fig. 3D, items 396 and 398 are shown as unidirectional, with an arrow pointing in one direction only; bulk transmission is shown because multiple lanes/wires exist), and a data bus (¶35).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined eyewear of Komaki and Gladkov, to utilize the interface of Jennings resulting in the interface comprising a mobile industry processor interface (MIPI), the interface configured to convert display serial interface (DSI) images of the first SoC to camera serial interface (CSI) images, wherein the interface is configured to unidirectionally send the CSI images from the first SoC to the second SoC for bulk image transmission, so as to facilitate data conversion between data streams of the D physical layer and data streams of the C physical layer (¶7).
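For illustration only (not as part of the rejection, and not code from any cited reference), the claimed data path can be sketched as a minimal model. All names below (DsiFrame, CsiFrame, bridge_convert, the four-lane split) are hypothetical placeholders standing in for the D-PHY/C-PHY bridging that Jennings describes at ¶35:

```python
# Illustrative sketch only: models the claimed one-way DSI-to-CSI image path
# between two SoCs. All names and structures are hypothetical; they are not
# drawn from Jennings, Komaki, or Gladkov.
from dataclasses import dataclass

@dataclass
class DsiFrame:
    """Image frame as emitted by the first SoC's display serial interface (DSI)."""
    pixels: bytes

@dataclass
class CsiFrame:
    """Image frame as consumed by the second SoC's camera serial interface (CSI)."""
    pixels: bytes
    lane: int  # multiple lanes model the claimed bulk transmission

def bridge_convert(frame: DsiFrame, num_lanes: int = 4) -> list[CsiFrame]:
    """Convert one DSI frame into per-lane CSI frames (the DSI-to-CSI bridge)."""
    chunk = -(-len(frame.pixels) // num_lanes)  # ceiling division so no bytes are dropped
    return [CsiFrame(pixels=frame.pixels[i * chunk:(i + 1) * chunk], lane=i)
            for i in range(num_lanes)]

def send_unidirectional(frames: list[CsiFrame], second_soc_rx) -> None:
    """One-way transfer: the first SoC never reads back over this interface."""
    for f in frames:
        second_soc_rx(f)
```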
Komaki, Gladkov, and Jennings combined do not teach the second SoC configured to support a computer vision (CV) algorithm, metadata of the CV algorithm, and render images including a location of a user’s head, location and orientation of the user’s hands, and low-resolution depth map of a scene in front of the user.
Petrov teaches an electronic device (fig. 3, item 120; ¶30, “the electronic device 120 includes a head-mountable enclosure. In various implementations, the head-mountable enclosure includes an attachment region to which another device with a display can be attached”), comprising: a processor (fig. 3, item 302; ¶58) configured to: support a computer vision (CV) algorithm (¶84, “object recognition information 462 associated with physical objects recognized within the physical environment 105 (e.g., based on a classification algorithm, computer vision (CV) techniques, or the like)”; ¶119, “the one or more input devices correspond to a computer vision (CV) engine that uses an image stream from one or more exterior-facing image sensors, a finger/hand/extremity tracking engine, an eye tracking engine, a touch-sensitive surface, one or more microphones, and/or the like”); generate metadata of the CV algorithm (fig. 4B, item 445 comprising item 462; fig. 4C, item 445; ¶115; ¶120; ¶123; ¶131-132); and render images (fig. 4C, item 454; ¶53) including a location of a user’s head (¶36; ¶42; ¶64), location and orientation of the user’s hands (¶36; ¶64), and low-resolution depth map of a scene in front of the user (¶31, “the optional remote input devices correspond to fixed or movable sensory equipment within the physical environment 105 (e.g., image sensors, depth sensors, infrared (IR) sensors, event cameras, microphones, etc.)”; ¶59, “one or more depth sensors (e.g., structured light, time-of-flight, LiDAR, or the like)” – LiDAR sensors are low resolution).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date to modify the combined eyewear of Komaki, Gladkov, and Jennings, such that capabilities of the processor of Petrov are implemented on a system on chip, resulting in the second SoC configured to support a computer vision (CV) algorithm, metadata of the CV algorithm, and render images including a location of a user’s head, location and orientation of the user’s hands, and low-resolution depth map of a scene in front of the user, as taught by Petrov so as to detect or identify objects (¶115).
Komaki, Gladkov, Jennings, and Petrov combined do not mention a bus separate from the interface and configured to send metadata of the CV algorithm from the second SoC back to the first SoC, wherein the second SoC is configured to instruct the first SoC via the bus to adjust and render a next image frame to be sent by the first SoC to the second SoC.
Yamazaki teaches an information processing apparatus (fig. 1, item 1; column 3, lines 26-29), comprising: a first system (fig. 3, item 10; column 3, lines 59-64); a second SoC (fig. 3, item 40; column 5, lines 4-20); an interface (fig. 3, item 51; column 5, lines 21-26) comprising a mobile industry processor interface (MIPI) extending between the first system and the second SoC (column 10, lines 33-41); and a bus separate from the interface and configured to send data from the second SoC back to the first system (fig. 8; column 15, lines 27-31), wherein the second SoC is configured to instruct the first system via the bus to output data to be sent by the first system to the second SoC (column 7, lines 55-65; column 9, lines 38-47 and 53-30; column 15, lines 16-20, “In another example, the SoC 40 (40a) may use other interfaces for acquisition and output of the data”).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined eyewear of Komaki, Gladkov, Jennings, and Petrov, such that the first system corresponds to the first SoC and the data corresponds to metadata of the CV algorithm or instructions to adjust and render a next image frame, resulting in a bus separate from the interface and configured to send metadata of the CV algorithm from the second SoC back to the first SoC, wherein the second SoC is configured to instruct the first SoC via the bus to adjust and render a next image frame to be sent by the first SoC to the second SoC, including a location of the user’s head, location and orientation of the user’s hands, and low-resolution depth map of a scene in front of the user, as taught by Yamazaki so as to implement an input device with a high degree of freedom (column 1, lines 30-31) and provide an alternative to the interface for sending instructions and data.
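Again for illustration only, the claimed feedback loop (CV metadata returned over a bus separate from the image interface, the first SoC adjusting the next frame accordingly) might be sketched as follows. The CvMetadata fields and the queue standing in for the bus are hypothetical placeholders, not structures taken from Yamazaki or Petrov:

```python
# Illustrative sketch only: the second SoC returns CV metadata over a bus that
# is separate from the image interface, and the first SoC uses that metadata
# to adjust and render the next image frame. All names are hypothetical.
from dataclasses import dataclass
from queue import Queue

@dataclass
class CvMetadata:
    """Hypothetical CV-algorithm outputs named in the claim."""
    head_location: tuple   # location of the user's head
    hand_pose: tuple       # location and orientation of the user's hands
    depth_map: list        # low-resolution depth map of the scene

feedback_bus: Queue = Queue()  # stands in for the bus separate from the image interface

def second_soc_step(image, cv_algorithm) -> None:
    """Second SoC: run the CV algorithm on a received frame, send metadata back."""
    metadata: CvMetadata = cv_algorithm(image)
    feedback_bus.put(metadata)

def first_soc_step(render):
    """First SoC: adjust and render the next frame per the returned metadata."""
    metadata: CvMetadata = feedback_bus.get()
    return render(metadata)  # the next frame then goes out over the image interface
```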
Gladkov teaches the first SoC performs video processing (¶106) and the second SoC is for outputting images (¶109). Komaki, Gladkov, Jennings, Petrov, and Yamazaki combined do not mention wherein the interface is configured to communicate metadata with the images from the first SoC to the second SoC.
Grossman teaches eyewear (fig. 3, item 300; ¶40 – augmented reality headset), comprising: a frame (¶40 – augmented reality headsets have a frame), a first system on chip (fig. 3, item 310; ¶38); and an interface (¶36 – MIPI); wherein the interface is configured to transmit both images and metadata to and from the first SoC (¶20; ¶28; ¶35; ¶42).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined eyewear of Komaki, Gladkov, Jennings, Petrov and Yamazaki, wherein the interface is configured to communicate metadata with the images from the first SoC to the second SoC, as taught by Grossman so as to produce video for augmented reality displays that have higher resolution where user eyes might focus and less resolution in peripheral vision zones (¶27), and because it is common for metadata to accompany image/video data (¶23).
With respect to Claim 3, claim 1 is incorporated. Komaki, Gladkov, Jennings, Petrov, and Yamazaki combined do not mention wherein the second SoC has a buffer configured to split up the images and the metadata.
Grossman teaches eyewear (fig. 3, item 300; ¶40 – augmented reality headset), comprising: a frame (¶40 – augmented reality headsets have a frame), a first system on chip (fig. 3, item 310; ¶38); and an interface (¶36 – MIPI); wherein the interface is configured to transmit both images and metadata to and from the first SoC (¶20; ¶28; ¶35; ¶42); wherein the first SoC has a buffer configured to split up the images and the metadata (¶42).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined eyewear of Komaki, Gladkov, Jennings, Petrov and Yamazaki, wherein either the first SoC, the second SoC, or both has a buffer configured to split up the images and the metadata, as taught by Grossman so as to produce video for augmented reality displays that have higher resolution where user eyes might focus and less resolution in peripheral vision zones (¶27).
With respect to Claim 11, Komaki teaches a method (¶164; ¶166; claim 3) of use of eyewear (figs. 1 & 3, item 100; ¶29; ¶102) including a frame (figs. 1 & 3, item 101; ¶29) having a first side (fig. 3, item 107; ¶29) and a second side (fig. 3, item 106; ¶29) and configured to be worn by a user having a head and hands (¶43), a first processor (fig. 3, item 11; ¶68) adjacent the first side of the frame and coupled to a camera (figs. 1 and 7, coupled to item 13R (camera) via electrical connections; ¶17; ¶102), a second processor (fig. 3, item 11*; ¶68) adjacent the second side and coupled to a display (figs. 3 and 7, item 12L; ¶35; coupled via electrical connections; the display corresponds to display 12L; ¶65, “data processors 11 and 11* having the same functions”; ¶68), the second processor coupled to the first processor; and a bus (¶102, a system bus of the data processor 11 is connected to a display and a camera 13).
Komaki does not teach the first processor as a first system on a chip (SoC) and the second processor as a second SoC nor does Komaki teach coupling of the second processor to the first processor by an interface extending between the first and second SoCs.
Gladkov teaches a method (¶133; claim 14) of use of eyewear (fig. 2B, item 112; ¶63) including a frame (¶64, “front frame”); a first system on chip (SoC) (fig. 6, item 630B; ¶109); a second SoC (fig. 6, item 630C; ¶109); and coupling by an interface (fig. 6, item 684; ¶108) extending between the first and second SoCs.
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of Komaki, such that the first processor is a first system on a chip (SoC), the second processor is a second SoC, and the second processor is coupled to the first processor by an interface extending between the first and second SoCs, as taught by Gladkov so as to constrain rendering complexity at the application level in response to resource availability (¶10).
Komaki and Gladkov combined do not teach an interface comprising a mobile industry processor interface (MIPI), the interface configured to convert display serial interface (DSI) images of the first SoC to camera serial interface (CSI) images, wherein the interface is configured to unidirectionally send the CSI images from the first processor/SoC to the second processor for bulk image transmission, and a bus, the method comprising: the first SoC processing DSI images; the interface converting the DSI images to CSI images; and the interface unidirectionally sending the CSI images to the second SoC.
Jennings teaches a method of using an interface between a first processor/SoC and a second processor/SoC, the interface is a mobile industry processor interface (MIPI) (¶28) and the interface (fig. 2, item 212; ¶35; fig. 3D, item 382; ¶49) is configured to convert display serial interface (DSI) images of a first processor/SoC (fig. 2, item 210; ¶50, “compress or decompress data between an SoC (or processor) and display”) to camera serial interface (CSI) images (¶35; ¶49), wherein the interface is configured to unidirectionally send the CSI images from the first processor/SoC to the second processor/SoC for bulk image transmission (fig. 3D, items 396 and 398 are shown as unidirectional, with an arrow pointing in one direction only; bulk transmission is shown because multiple lanes/wires exist), and a data bus (¶35), the method comprising: the first processor/SoC processing DSI images (¶35, “a processor or camera 210 … a compression method is used for enhancing efficiency of data transmission. For example, camera 210 includes a compressor 220 which compresses outgoing D-PHY data steam before transmission”); the interface converting the DSI images to CSI images (¶35, “A function of bridge device 212 is to bridge or convert data streams for transmission between D-PHY and C-PHY”); and the interface unidirectionally sending the CSI images to the second SoC (fig. 2; ¶35, “Bridge device 212 also includes a compressor 224 which compresses outgoing C-PHY data steam before transmission”).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined method of Komaki and Gladkov, to utilize the interface of Jennings resulting in an interface comprising a mobile industry processor interface (MIPI), the interface configured to convert display serial interface (DSI) images of the first SoC to camera serial interface (CSI) images, wherein the interface is configured to unidirectionally send the CSI images from the first processor/SoC to the second processor for bulk image transmission, and a data bus, the method comprising: the first SoC processing DSI images; the interface converting the DSI images to CSI images; and the interface unidirectionally sending the CSI images to the second SoC, so as to facilitate data conversion between data streams of D physical layer and data streams of C physical layer (¶7).
Komaki, Gladkov, and Jennings combined do not teach the second SoC configured to support a computer vision (CV) algorithm, metadata of the CV algorithm, and render images including a location of a user’s head, location and orientation of the user’s hands, and low-resolution depth map of a scene in front of the user.
Petrov teaches a method (figs. 7-8; ¶119; ¶141) of using an electronic device (fig. 3, item 120; ¶30, “the electronic device 120 includes a head-mountable enclosure. In various implementations, the head-mountable enclosure includes an attachment region to which another device with a display can be attached”), the method comprising: supporting a computer vision (CV) algorithm (¶84, “object recognition information 462 associated with physical objects recognized within the physical environment 105 (e.g., based on a classification algorithm, computer vision (CV) techniques, or the like)”; ¶119, “the one or more input devices correspond to a computer vision (CV) engine that uses an image stream from one or more exterior-facing image sensors, a finger/hand/extremity tracking engine, an eye tracking engine, a touch-sensitive surface, one or more microphones, and/or the like”); generating metadata of the CV algorithm (fig. 4B, item 445 comprising item 462; fig. 4C, item 445; ¶115; ¶120; ¶123; ¶131-132); and rendering images (fig. 4C, item 454; ¶53; ¶133) including a location of a user’s head (¶36; ¶42; ¶64), location and orientation of the user’s hands (¶36; ¶64), and low-resolution depth map of a scene in front of the user (¶31, “the optional remote input devices correspond to fixed or movable sensory equipment within the physical environment 105 (e.g., image sensors, depth sensors, infrared (IR) sensors, event cameras, microphones, etc.)”; ¶59, “one or more depth sensors (e.g., structured light, time-of-flight, LiDAR, or the like)” – LiDAR sensors are low resolution).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date to modify the combined method of Komaki, Gladkov, and Jennings, such that the method comprises the second SoC configured to support a computer vision (CV) algorithm, metadata of the CV algorithm, and render images including a location of a user’s head, location and orientation of the user’s hands, and low-resolution depth map of a scene in front of the user, as taught by Petrov so as to detect or identify objects (¶115).
Komaki, Gladkov, Jennings, and Petrov combined do not teach a bus separate from the interface and configured to send metadata of the CV algorithm back from the second SoC to the first SoC, wherein the second SoC is configured to instruct the first SoC via the bus to adjust and render a next image to be sent by the first SoC to the second SoC such that a method comprises: the second SoC sending metadata of the CV algorithm back to the first SoC via the bus such that the first SoC adjusts and renders a next image to be sent to the second SoC, and the first SoC sending the next image to the second SoC.
Yamazaki teaches an information processing apparatus (fig. 1, item 1; column 3, lines 26-29), comprising: a main memory executing programs including an OS, various types of drivers to operate peripherals as hardware, various types of service/utility, and application programs (fig. 2, item 12; column 4, lines 1-8); a first system (fig. 3, item 10; column 3, lines 59-64); a second SoC (fig. 3, item 40; column 5, lines 4-20); an interface (fig. 3, item 51; column 5, lines 21-26) comprising a mobile industry processor interface (MIPI) extending between the first system and the second SoC (column 10, lines 33-41); and a bus separate from the interface and configured to send data back from the second SoC to the first system (fig. 8; column 15, lines 27-31), wherein the second SoC is configured to instruct the first system via the bus to output data to be sent by the first system to the second SoC (column 15, lines 16-20, “In another example, the SoC 40 (40a) may use other interfaces for acquisition and output of the data”), the programs (column 3, lines 26-29) comprising a method comprising: the second SoC sending data back to the first system via the bus (fig. 8; column 15, lines 27-31) such that the first system outputs data to be sent to the second SoC, and the first system sending the data to the second SoC (column 7, lines 55-65; column 9, lines 38-47 and 53-30; column 15, lines 16-20).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined method of Komaki, Gladkov, Jennings, and Petrov, such that the first system corresponds to the first SoC and the data corresponds to metadata of the CV algorithm or instructions to adjust and render a next image frame, resulting in a bus separate from the interface and configured to send metadata of the CV algorithm back from the second SoC to the first SoC, wherein the second SoC is configured to instruct the first SoC via the bus to adjust and render a next image to be sent by the first SoC to the second SoC, the method comprising: the second SoC sending metadata of the CV algorithm back to the first SoC via the bus such that the first SoC adjusts and renders a next image frame to be sent to the second SoC, and the first SoC sending the next image to the second SoC, as taught by Yamazaki so as to implement an input device with a high degree of freedom (column 1, lines 30-31) and provide an alternative to the interface for sending instructions and data.
Gladkov teaches the first SoC performs video processing (¶106) and the second SoC is for outputting images (¶109). Komaki, Gladkov, Jennings, Petrov and Yamazaki combined do not mention wherein the interface is configured to communicate metadata with the images from the first SoC to the second SoC.
Grossman teaches a method (¶60; ¶70; ¶90; claim 1) of use of eyewear (fig. 3, item 300; ¶40 – augmented reality headset) including a frame (¶40 – augmented reality headsets have a frame), a first system on chip (fig. 3, item 310; ¶38); and an interface (¶36 – MIPI); wherein the interface is configured to transmit both images and metadata to and from the first SoC (¶20; ¶28; ¶35; ¶42).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined method of Komaki, Gladkov, Jennings, Petrov and Yamazaki, wherein the interface is configured to transmit both images and metadata from the first SoC to the second SoC, as taught by Grossman so as to produce video for augmented reality displays that have higher resolution where user eyes might focus and less resolution in peripheral vision zones (¶27).
With respect to Claim 13, claim 11 is incorporated. Komaki, Gladkov, Jennings, Petrov, and Yamazaki combined do not mention wherein the second SoC has a buffer, the buffer splitting up the images and the metadata.
Grossman teaches a method (¶60; ¶70; ¶90; claim 1) of use of eyewear (fig. 3, item 300; ¶40 – augmented reality headset) including a frame (¶40 – augmented reality headsets have a frame), a first system on chip (fig. 3, item 310; ¶38); and an interface (¶36 – MIPI); wherein the interface is configured to transmit both images and metadata to and from the first SoC (¶20; ¶28; ¶35; ¶42); wherein a first SoC has a buffer, the buffer splitting up the images and the metadata (¶42).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined method of Komaki, Gladkov, Jennings, Petrov, and Yamazaki, wherein either the first SoC, the second SoC, or both has a buffer, the buffer splitting up the images and the metadata, as taught by Grossman so as to produce video for augmented reality displays that have higher resolution where user eyes might focus and less resolution in peripheral vision zones (¶27).
With respect to Claim 18, Komaki teaches a non-transitory computer readable medium (fig. 7, item 11b; ¶99) including instructions for operating an eyewear device (figs. 1 & 3, item 100; ¶29; ¶102) including a frame (figs. 1 & 3, item 101; ¶29) having a first side (fig. 3, item 107; ¶29) and a second side (fig. 3, item 106; ¶29) and configured to be worn by a user having a head and hands (¶43), a first processor (fig. 3, item 11; ¶68) adjacent the first side of the frame, the first processor coupled to a camera (figs. 1 and 7, coupled to item 13R (camera) via electrical connections; ¶17; ¶102), a second processor (fig. 3, item 11*; ¶68) adjacent the second side, the second processor coupled to a display (figs. 3 and 7, item 12L; ¶35; coupled via electrical connections; the display corresponds to display 12L; ¶65, “data processors 11 and 11* having the same functions”; ¶68), the second processor coupled to the first processor (via electronic components/elements); and a data bus (¶102, a system bus of the data processor 11 is connected to a display and a camera 13).
Komaki does not teach the first processor as a first system on a chip (SoC) and the second processor as a second SoC nor does Komaki teach coupling of the second processor to the first processor by an interface extending between the first and second SoCs.
Gladkov teaches a non-transitory computer readable medium (¶133) including instructions for operating an eyewear device (fig. 2B, item 112; ¶63) including a frame (¶64, “front frame”); a first system on chip (SoC) (fig. 6, item 630B; ¶109); a second SoC (fig. 6, item 630C; ¶109); and coupling by an interface (fig. 6, item 684; ¶108) extending between the first and second SoCs.
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the non-transitory computer readable medium of Komaki, such that the first processor is a first system on a chip (SoC), the second processor is a second SoC, and the second processor is coupled to the first processor by an interface extending between the first and second SoCs, as taught by Gladkov so as to constrain rendering complexity at the application level in response to resource availability (¶10).
Komaki and Gladkov combined do not teach the interface comprising a mobile industry processor interface (MIPI), the interface configured to convert display serial interface (DSI) images of the first SoC to camera serial interface (CSI) images, wherein the interface is configured to unidirectionally send the CSI images from the first SoC to the second SoC for bulk image transmission, the method comprising: the first SoC to process DSI images; the interface to convert the DSI images to CSI images; and the interface to unidirectionally send the CSI images to the second SoC.
Jennings teaches a method of using an interface between a first processor/SoC and a second processor/SoC, the interface is a mobile industry processor interface (MIPI) (¶28) and the interface (fig. 2, item 212; ¶35; fig. 3D, item 382; ¶49) configured to convert display serial interface (DSI) images of a first processor/SoC (fig. 2, item 210; ¶50, “compress or decompress data between an SoC (or processor) and display”) to camera serial interface (CSI) images (¶35; ¶49), wherein the interface is configured to unidirectionally send the CSI images from the first processor/SoC to the second processor/SoC for bulk image transmission (fig. 3D, items 396 and 398 are shown as unidirectional, with an arrow pointing in one direction only; bulk transmission is shown because multiple lanes/wires exist), and a data bus (¶35), the method comprising: the first processor/SoC to process DSI images (¶35, “a processor or camera 210 … a compression method is used for enhancing efficiency of data transmission. For example, camera 210 includes a compressor 220 which compresses outgoing D-PHY data steam before transmission”); the interface to convert the DSI images to CSI images (¶35, “A function of bridge device 212 is to bridge or convert data streams for transmission between D-PHY and C-PHY”); and the interface to unidirectionally send the CSI images to the second SoC (fig. 3D, items 396 and 398 are shown as unidirectional, with an arrow pointing in one direction only; bulk transmission is shown because multiple lanes/wires exist).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined non-transitory computer readable medium of Komaki and Gladkov, to utilize the interface of Jennings resulting in the interface being a mobile industry processor interface (MIPI) configured to convert display serial interface (DSI) images of the first SoC to camera serial interface (CSI) images, wherein the interface is configured to unidirectionally send the CSI images from the first processor/SoC to the second processor/SoC for bulk image transmission, the method comprising: the first SoC to process DSI images; the interface to convert the DSI images to CSI images; and the interface to unidirectionally send the CSI images to the second SoC, so as to facilitate data conversion between data streams of the D physical layer and data streams of the C physical layer (¶7).
Komaki, Gladkov, and Jennings combined do not teach the second SoC configured to support a computer vision (CV) algorithm, metadata of the CV algorithm, and render images including a location of a user’s head, location and orientation of the user’s hands, and low-resolution depth map of a scene in front of the user.
Petrov teaches an electronic device (fig. 3, item 120; ¶30, “the electronic device 120 includes a head-mountable enclosure. In various implementations, the head-mountable enclosure includes an attachment region to which another device with a display can be attached”), comprising: a processor (fig. 3, item 302; ¶58); and a computer readable storage medium including instructions for operating the electronic device (fig. 3, item 320; ¶62); the processor configured to: support a computer vision (CV) algorithm (¶84, “object recognition information 462 associated with physical objects recognized within the physical environment 105 (e.g., based on a classification algorithm, computer vision (CV) techniques, or the like)”; ¶119, “the one or more input devices correspond to a computer vision (CV) engine that uses an image stream from one or more exterior-facing image sensors, a finger/hand/extremity tracking engine, an eye tracking engine, a touch-sensitive surface, one or more microphones, and/or the like”); generate metadata of the CV algorithm (fig. 4B, item 445 comprising item 462; fig. 4C, item 445; ¶115; ¶120; ¶123; ¶131-132); and render images (fig. 4C, item 454; ¶53) including a location of a user’s head (¶36; ¶42; ¶64), location and orientation of the user’s hands (¶36; ¶64), and low-resolution depth map of a scene in front of the user (¶31, “the optional remote input devices correspond to fixed or movable sensory equipment within the physical environment 105 (e.g., image sensors, depth sensors, infrared (IR) sensors, event cameras, microphones, etc.)”; ¶59, “one or more depth sensors (e.g., structured light, time-of-flight, LiDAR, or the like)” – LiDAR sensors are low resolution).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date to modify the combined non-transitory computer readable medium of Komaki, Gladkov, and Jennings, such that the second SoC is configured to support a computer vision (CV) algorithm, metadata of the CV algorithm, and render images including a location of a user’s head, location and orientation of the user’s hands, and low-resolution depth map of a scene in front of the user, as taught by Petrov so as to detect or identify objects (¶115).
Komaki, Gladkov, Jennings, and Petrov combined do not teach a bus separate from the interface and configured to send metadata of the CV algorithm back from the second SoC to the first SoC, wherein the second SoC is configured to instruct the first SoC via the bus to adjust and render a next image frame to be sent by the first SoC to the second SoC such that a method comprises: the second SoC sending metadata of the CV algorithm back to the first SoC via the bus such that the first SoC adjusts and renders a next image to be sent to the second SoC, and the first SoC sending the next image to the second SoC.
Yamazaki teaches an information processing apparatus (fig. 1, item 1; column 3, lines 26-29), comprising: a main memory executing programs including an OS, various types of drivers to operate peripherals as hardware, various types of service/utility, and application programs (fig. 2, item 12; column 4, lines 1-8); a first system (fig. 3, item 10; column 3, lines 59-64); a second SoC (fig. 3, item 40; column 5, lines 4-20); an interface (fig. 3, item 51; column 5, lines 21-26) comprising a mobile industry processor interface (MIPI) extending between the first system and the second SoC (column 10, lines 33-41); and a bus separate from the interface and configured to send data back from the second SoC to the first system (fig. 8; column 15, lines 27-31), wherein the second SoC is configured to instruct the first system via the bus to output data to be sent by the first system to the second SoC (column 15, lines 16-20, “In another example, the SoC 40 (40a) may use other interfaces for acquisition and output of the data”), the programs (column 3, lines 26-29) comprising instructions configured to instruct: the second SoC sending data back to the first system via the bus (fig. 8; column 15, lines 27-31) such that the first system outputs data to be sent to the second SoC, and the first system sending the data to the second SoC (column 7, lines 55-65; column 9, lines 38-47 and 53-30; column 15, lines 16-20).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined non-transitory computer readable medium of Komaki, Gladkov, Jennings, and Petrov, such that the first system corresponds to the first SoC and the data corresponds to metadata of the CV algorithm or instructions to adjust and render a next image frame, resulting in a bus separate from the interface and configured to send metadata of the CV algorithm back from the second SoC to the first SoC, wherein the second SoC is configured to instruct the first SoC via the bus to adjust and render a next image frame to be sent by the first SoC to the second SoC; the programs comprising instructions configured to instruct: the second SoC sending metadata of the CV algorithm back to the first SoC via the bus such that the first SoC adjusts and renders a next image frame to be sent to the second SoC, and the first SoC sending the next image frame to the second SoC, as taught by Yamazaki so as to implement an input device with a high degree of freedom (column 1, lines 30-31) and provide an alternative to the interface for sending instructions and data.
Gladkov teaches the first SoC performs video processing (¶106) and the second SoC is for outputting images (¶109). Komaki, Gladkov, Jennings, Petrov and Yamazaki combined do not mention wherein the interface is configured to communicate metadata with the images from the first SoC to the second SoC.
Grossman teaches an environment (fig. 3, item 300; ¶40 – includes an augmented reality headset) including a frame (¶40 – augmented reality headsets have a frame), a first system on chip (fig. 3, item 310; ¶38); a computer readable medium (fig. 3, item 323; ¶60); and an interface (¶36 – MIPI); wherein the interface is configured to transmit both images and metadata to and from the first SoC (¶20; ¶28; ¶35; ¶42).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined non-transitory computer readable medium of Komaki, Gladkov, Jennings, Petrov and Yamazaki, wherein the interface is configured to transmit both images and metadata from the first SoC to the second SoC, as taught by Grossman so as to produce video for augmented reality displays that have higher resolution where user eyes might focus and less resolution in peripheral vision zones (¶27).
With respect to Claim 20, claim 18 is incorporated. Komaki, Gladkov, Jennings, Petrov, and Yamazaki combined do not mention wherein the instructions when executed by the eyewear device instruct a buffer of the second SoC to split up the images and the metadata.
Grossman teaches a non-transitory computer readable medium (¶47) including instructions for operating an eyewear device (fig. 3, item 300; ¶40 – augmented reality headset) including a frame (¶40 – augmented reality headsets have a frame), a first system on chip (fig. 3, item 310; ¶38); and an interface (¶36 – MIPI); wherein the instructions when executed by the eyewear device instruct a buffer of the first SoC to split up the images and the metadata (¶20; ¶28; ¶35; ¶42).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined non-transitory computer readable medium of Komaki, Gladkov, Jennings, Petrov, and Yamazaki, wherein the instructions when executed by the eyewear device instruct a buffer of the first SoC, the second SoC, or both to split up the images and the metadata, as taught by Grossman so as to produce video for augmented reality displays that have higher resolution where user eyes might focus and less resolution in peripheral vision zones (¶27).
Claims 1, 3, 11, 13, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Komaki, in view of Gladkov, in view of Jennings, in view of Petrov, in view of Rozen et al. (Pub. No.: US 2019/0087225 A1), hereinafter referred to as Rozen, and in view of Grossman.
With respect to Claim 1, Komaki teaches eyewear (figs. 1 & 3, item 100; ¶29; ¶102), comprising: a frame (figs. 1 & 3, item 101; ¶29) having a first side (fig. 3, item 107; ¶29) and a second side (fig. 3, item 106; ¶29) and configured to be worn by a user having a head and hands (¶43); a first processor (fig. 3, item 11; ¶68) adjacent the first side of the frame, the first processor coupled to a camera (figs. 1 and 7, coupled to item 13R (camera) via electrical connections; ¶17; ¶102); a second processor (fig. 3, item 11*; ¶68) adjacent the second side, the second processor coupled to the first processor and to a display (figs. 3 and 7, item 12L; ¶35; coupled via electrical connections; the display corresponds to display 12L; ¶65, “data processors 11 and 11* having the same functions”; ¶68); and a bus (¶102, a system bus of the data processor 11 is connected to a display and a camera 13).
Komaki does not teach the first processor as a first system on a chip (SoC) and the second processor as a second SoC nor does Komaki teach an interface extending between the first and second SoCs.
Gladkov teaches eyewear (fig. 2B, item 112; ¶63), comprising: a frame (¶64, “front frame”); a first system on chip (SoC) (fig. 6, item 630A; ¶108-109); a second SoC (fig. 6, item 630B; ¶109); and an interface (fig. 6, item 684; ¶108) extending between the first and second SoCs.
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the eyewear of Komaki, such that the first processor is a first system on a chip (SoC), the second processor is a second SoC, and an interface extends between the first and second SoCs, as taught by Gladkov so as to constrain rendering complexity at the application level in response to resource availability (¶10).
Komaki and Gladkov combined do not teach an interface comprising a mobile industry processor interface (MIPI), the interface configured to convert display serial interface (DSI) images of the first SoC to camera serial interface (CSI) images, wherein the interface is configured to unidirectionally send the CSI images from the first SoC to the second SoC for bulk image transmission.
Jennings teaches an interface comprising a mobile industry processor interface (MIPI) (¶28), the interface (fig. 2, item 212; ¶35; fig. 3D, item 382; ¶49) configured to convert display serial interface (DSI) images of a first processor/SoC (fig. 2, item 210; ¶50, “compress or decompress data between an SoC (or processor) and display”) to camera serial interface (CSI) images (¶35; ¶49), wherein the interface is configured to unidirectionally send the CSI images from the first processor/SoC to the second processor for bulk image transmission (fig. 3D, items 396 and 398 are shown as unidirectional, with an arrow pointing in one direction only; bulk transmission is shown because multiple lanes/wires exist), and a data bus (¶35).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined eyewear of Komaki and Gladkov, to utilize the interface of Jennings resulting in the interface comprising a mobile industry processor interface (MIPI), the interface configured to convert display serial interface (DSI) images of the first SoC to camera serial interface (CSI) images, wherein the interface is configured to unidirectionally send the CSI images from the first SoC to the second SoC for bulk image transmission, so as to facilitate data conversion between data streams of the D physical layer and data streams of the C physical layer (¶7).
Komaki, Gladkov, and Jennings combined do not teach the second SoC configured to support a computer vision (CV) algorithm, metadata of the CV algorithm, and render images including a location of a user’s head, location and orientation of the user’s hands, and low-resolution depth map of a scene in front of the user.
Petrov teaches an electronic device (fig. 3, item 120; ¶30, “the electronic device 120 includes a head-mountable enclosure. In various implementations, the head-mountable enclosure includes an attachment region to which another device with a display can be attached”), comprising: a processor (fig. 3, item 302; ¶58) configured to: support a computer vision (CV) algorithm (¶84, “object recognition information 462 associated with physical objects recognized within the physical environment 105 (e.g., based on a classification algorithm, computer vision (CV) techniques, or the like)”; ¶119, “the one or more input devices correspond to a computer vision (CV) engine that uses an image stream from one or more exterior-facing image sensors, a finger/hand/extremity tracking engine, an eye tracking engine, a touch-sensitive surface, one or more microphones, and/or the like”); generate metadata of the CV algorithm (fig. 4B, item 445 comprising item 462; fig. 4C, item 445); and render images (fig. 4C, item 454; ¶53) including a location of a user’s head (¶36; ¶42; ¶64), location and orientation of the user’s hands (¶36; ¶64), and low-resolution depth map of a scene in front of the user (¶31, “the optional remote input devices correspond to fixed or movable sensory equipment within the physical environment 105 (e.g., image sensors, depth sensors, infrared (IR) sensors, event cameras, microphones, etc.)”; ¶59, “one or more depth sensors (e.g., structured light, time-of-flight, LiDAR, or the like)” – LiDAR sensors are low resolution).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date to modify the combined eyewear of Komaki, Gladkov, and Jennings, such that capabilities of the processor of Petrov are implemented on a system on chip, resulting in the second SoC configured to support a computer vision (CV) algorithm, metadata of the CV algorithm, and render images including a location of a user’s head, location and orientation of the user’s hands, and low-resolution depth map of a scene in front of the user, as taught by Petrov so as to detect or identify objects (¶115).
Komaki, Gladkov, Jennings, and Petrov combined do not mention a bus separate from the interface and configured to send metadata of the CV algorithm from the second SoC back to the first SoC, wherein the second SoC is configured to instruct the first SoC via the bus to adjust and render a next image frame to be sent by the first SoC to the second SoC.
Rozen teaches eyewear (¶68), comprising: a frame (¶68, wearable headwear has a frame); a first system on a chip (SoC) (fig. 9, item 170; fig. 11, item 200; ¶77, “a processing element may include other elements on chip with the processor core 200”; fig. 12, item 1070: first processing element = first SoC; ¶78); a second SoC (¶71, other SoCs not illustrated; fig. 12, item 1080: second processing element = second SoC; ¶78); and a bus (fig. 12, item 1050: bus; ¶79); wherein the bus is separate from interfaces (fig. 12, items 1096, 1092, 1076, and 1096; ¶84; ¶86) and configured to send instructions and parameters from the second SoC back to the first SoC (¶26, “For example, the second core 102b may send a message through the first queue 112a to the first core 102a, that includes an instruction and/or parameters to the first core 102a to execute the function F.sub.2 to modify the first data 106a”), wherein the second SoC is configured to instruct the first SoC via the bus to modify data (¶26, “The message may include all the required data (e.g., parameters and/or final data values) for the requested modification to the first data 106a, and then first core 102a may execute the function F.sub.2 to modify the first data 106a based on the required data”).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined eyewear of Komaki, Gladkov, Jennings, and Petrov, such that the instructions and parameters correspond to metadata of the CV algorithm and to modify data corresponds to adjust and render a next image frame to be sent by the first SoC to the second SoC, resulting in the eyewear comprising a bus separate from the interface and configured to send metadata of the CV algorithm from the second SoC back to the first SoC, wherein the second SoC is configured to instruct the first SoC via the bus to adjust and render a next image frame to be sent by the first SoC to the second SoC, as taught by Rozen so as to increase efficiency in workload processing and increase battery life by distributing the workload through SoC designation (¶2).
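For illustration only, Rozen’s queue-based messaging quoted above (¶26), in which the second core sends an instruction and parameters through a queue so that the first core modifies its own data, might be sketched as follows; the function names and the mapping onto two SoCs are hypothetical:

```python
# Illustrative sketch only: models the inter-core message queue described in
# Rozen at ¶26, mapped onto two SoCs. The message carries an instruction and
# its parameters; the receiving SoC applies the requested modification.
from queue import Queue

first_queue: Queue = Queue()  # stands in for the queue into the first SoC/core

def second_soc_request(params: dict) -> None:
    """Second SoC: post an instruction plus its parameters (mapped above to CV metadata)."""
    first_queue.put(("adjust_and_render_next_frame", params))

def first_soc_service(handlers: dict):
    """First SoC: pop the message and execute the requested function on its own data."""
    instruction, params = first_queue.get()
    return handlers[instruction](**params)
```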
Gladkov teaches the first SoC performs video processing (¶106) and the second SoC is for outputting images (¶109). Komaki, Gladkov, Jennings, Petrov, and Rozen combined do not mention wherein the interface is configured to communicate metadata with the images from the first SoC to the second SoC.
Grossman teaches eyewear (fig. 3, item 300; ¶40 – augmented reality headset), comprising: a frame (¶40 – augmented reality headsets have a frame), a first system on chip (fig. 3, item 310; ¶38); and an interface (¶36 – MIPI); wherein the interface is configured to transmit both images and metadata to and from the first SoC (¶20; ¶28; ¶35; ¶42).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined eyewear of Komaki, Gladkov, Jennings, Petrov and Rozen, wherein the interface is configured to communicate metadata with the images from the first SoC to the second SoC, as taught by Grossman so as to produce video for augmented reality displays that have higher resolution where user eyes might focus and less resolution in peripheral vision zones (¶27), and because it is common for metadata to accompany image/video data (¶23).
With respect to Claim 3, claim 1 is incorporated. Komaki, Gladkov, Jennings, Petrov, and Rozen combined do not mention wherein the second SoC has a buffer configured to split up the images and the metadata.
Grossman teaches eyewear (fig. 3, item 300; ¶40 – augmented reality headset), comprising: a frame (¶40 – augmented reality headsets have a frame), a first system on chip (fig. 3, item 310; ¶38); and an interface (¶36 – MIPI); wherein the interface is configured to transmit both images and metadata to and from the first SoC (¶20; ¶28; ¶35; ¶42); wherein the first SoC has a buffer configured to split up the images and the metadata (¶42).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined eyewear of Komaki, Gladkov, Jennings, Petrov and Rozen, wherein either the first SoC, the second SoC, or both has a buffer configured to split up the images and the metadata, as taught by Grossman so as to produce video for augmented reality displays that have higher resolution where user eyes might focus and less resolution in peripheral vision zones (¶27).
With respect to Claim 11, Komaki teaches a method (¶164; ¶166; claim 3) of use of eyewear (figs. 1 & 3, item 100; ¶29; ¶102) including a frame (figs. 1 & 3, item 101; ¶29) having a first side (fig. 3, item 107; ¶29) and a second side (fig. 3, item 106; ¶29) and configured to be worn by a user having a head and hands (¶43), a first processor (fig. 3, item 11; ¶68) adjacent the first side of the frame and coupled to a camera (figs. 1 and 7, coupled to item 13R (camera) via electrical connections; ¶17; ¶102), a second processor (fig. 3, item 11*; ¶68) adjacent the second side and coupled to a display (figs. 3 and 7, item 12L; ¶35; coupled via electrical connections; the display corresponds to display 12L; ¶65, “data processors 11 and 11* having the same functions”; ¶68), the second processor coupled to the first processor; and a bus (¶102, a system bus of the data processor 11 is connected to a display and a camera 13).
Komaki does not teach the first processor as a first system on a chip (SoC) and the second processor as a second SoC nor does Komaki teach coupling of the second processor to the first processor by an interface extending between the first and second SoCs.
Gladkov teaches a method (¶133; claim 14) of use of eyewear (fig. 2B, item 112; ¶63) including a frame (¶64, “front frame”); a first system on chip (SoC) (fig. 6, item 630B; ¶109); a second SoC (fig. 6, item 630C; ¶109); and coupling by an interface (fig. 6, item 684; ¶108) extending between the first and second SoCs.
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of Komaki, such that the first processor is a first system on a chip (SoC), the second processor is a second SoC, and the second processor is coupled to the first processor by an interface extending between the first and second SoCs, as taught by Gladkov so as to constrain rendering complexity at the application level in response to resource availability (¶10).
Komaki and Gladkov combined do not teach an interface comprising a mobile industry processor interface (MIPI), the interface configured to convert display serial interface (DSI) images of the first SoC to camera serial interface (CSI) images, wherein the interface is configured to unidirectionally send the CSI images from the first processor/SoC to the second processor for bulk image transmission, and a bus, the method comprising: the first SoC processing DSI images; the interface converting the DSI images to CSI images; and the interface unidirectionally sending the CSI images to the second SoC.
Jennings teaches a method of using an interface between a first processor/SoC and a second processor/SoC, the interface is a mobile industry processor interface (MIPI) (¶28) and the interface (fig. 2, item 212; ¶35; fig. 3D, item 382; ¶49) is configured to convert display serial interface (DSI) images of a first processor/SoC (fig. 2, item 210; ¶50, “compress or decompress data between an SoC (or processor) and display”) to camera serial interface (CSI) images (¶35; ¶49), wherein the interface is configured to unidirectionally send the CSI images from the first processor/SoC to the second processor/SoC for bulk image transmission (fig. 3D, items 396 and 398 are shown as unidirectional, with an arrow pointing in one direction only; bulk transmission is shown because multiple lanes/wires exist), and a data bus (¶35), the method comprising: the first processor/SoC processing DSI images (¶35, “a processor or camera 210 … a compression method is used for enhancing efficiency of data transmission. For example, camera 210 includes a compressor 220 which compresses outgoing D-PHY data steam before transmission”); the interface converting the DSI images to CSI images (¶35, “A function of bridge device 212 is to bridge or convert data streams for transmission between D-PHY and C-PHY”); and the interface unidirectionally sending the CSI images to the second SoC (fig. 2; ¶35, “Bridge device 212 also includes a compressor 224 which compresses outgoing C-PHY data steam before transmission”).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined method of Komaki and Gladkov, to utilize the interface of Jennings resulting in an interface comprising a mobile industry processor interface (MIPI), the interface configured to convert display serial interface (DSI) images of the first SoC to camera serial interface (CSI) images, wherein the interface is configured to unidirectionally send the CSI images from the first processor/SoC to the second processor for bulk image transmission, and a data bus, the method comprising: the first SoC processing DSI images; the interface converting the DSI images to CSI images; and the interface unidirectionally sending the CSI images to the second SoC, so as to facilitate data conversion between data streams of D physical layer and data streams of C physical layer (¶7).
Komaki, Gladkov, and Jennings combined do not teach the second SoC configured to support a computer vision (CV) algorithm, metadata of the CV algorithm, and render images including a location of a user’s head, location and orientation of the user’s hands, and low-resolution depth map of a scene in front of the user.
Petrov teaches a method (figs. 7-8; ¶119; ¶141) of using an electronic device (fig. 3, item 120; ¶30, “the electronic device 120 includes a head-mountable enclosure. In various implementations, the head-mountable enclosure includes an attachment region to which another device with a display can be attached”), the method comprising: supporting a computer vision (CV) algorithm (¶84, “object recognition information 462 associated with physical objects recognized within the physical environment 105 (e.g., based on a classification algorithm, computer vision (CV) techniques, or the like)”; ¶119, “the one or more input devices correspond to a computer vision (CV) engine that uses an image stream from one or more exterior-facing image sensors, a finger/hand/extremity tracking engine, an eye tracking engine, a touch-sensitive surface, one or more microphones, and/or the like”); generating metadata of the CV algorithm (fig. 4B, item 445 comprising item 462; fig. 4C, item 445; ¶115; ¶120; ¶123; ¶131-132); and rendering images (fig. 4C, item 454; ¶53; ¶133) including a location of a user’s head (¶36; ¶42; ¶64), location and orientation of the user’s hands (¶36; ¶64), and low-resolution depth map of a scene in front of the user (¶31, “the optional remote input devices correspond to fixed or movable sensory equipment within the physical environment 105 (e.g., image sensors, depth sensors, infrared (IR) sensors, event cameras, microphones, etc.)”; ¶59, “one or more depth sensors (e.g., structured light, time-of-flight, LiDAR, or the like)” – LiDAR sensors are low resolution).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date to modify the combined method of Komaki, Gladkov, and Jennings, such that the second SoC is configured to support a computer vision (CV) algorithm, generate metadata of the CV algorithm, and render images including a location of a user’s head, a location and orientation of the user’s hands, and a low-resolution depth map of a scene in front of the user, as taught by Petrov, so as to detect or identify objects (¶115).
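As a purely illustrative aid, not drawn from Petrov, the sketch below shows one plausible shape for the recited CV metadata; the field names are hypothetical stand-ins for a head location, per-hand location/orientation, and a low-resolution depth map.

    from dataclasses import dataclass

    @dataclass
    class CvMetadata:
        # Hypothetical record of CV-algorithm outputs: the user's head
        # location, per-hand location/orientation pairs, and a coarse
        # (low-resolution) depth map of the scene in front of the user.
        head_position: tuple   # (x, y, z) in meters
        hand_poses: list       # [(position_xyz, orientation_quaternion), ...]
        depth_map: list        # small grid of depth samples in meters

    if __name__ == "__main__":
        md = CvMetadata(
            head_position=(0.0, 1.6, 0.0),
            hand_poses=[((0.2, 1.2, 0.3), (0.0, 0.0, 0.0, 1.0)),
                        ((-0.2, 1.2, 0.3), (0.0, 0.0, 0.0, 1.0))],
            depth_map=[[2.5] * 8 for _ in range(6)],  # 6x8 grid
        )
        print(len(md.depth_map), "rows of depth samples")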
Komaki, Gladkov, Jennings, and Petrov combined do not teach a bus separate from the interface and configured to send metadata of the CV algorithm back from the second SoC to the first SoC, wherein the second SoC is configured to instruct the first SoC via the bus to adjust and render a next image to be sent by the first SoC to the second SoC such that the method comprises: the second SoC sending metadata of the CV algorithm back to the first SoC via the bus such that the first SoC adjusts and renders a next image to be sent to the second SoC, and the first SoC sending the next image to the second SoC.
Rozen et al. (Pub. No.: US 2019/0087225 A1), hereinafter referred to as Rozen, teaches eyewear (¶68) and a method of use (figs. 2-5; ¶2-8), the eyewear comprising: a frame (¶68, wearable headwear has a frame); a first system on a chip (SoC) (fig. 9, item 170; fig. 11, item 200; ¶77, “a processing element may include other elements on chip with the processor core 200”; fig. 12, item 1070: first processing element = first SoC; ¶78); a second SoC (¶71, other SoCs not illustrated; fig. 12, item 1080: second processing element = second SoC; ¶78); and a bus (fig. 12, item 1050: bus; ¶79); wherein the bus is separate from interfaces (fig. 12, items 1076, 1092, and 1096; ¶84; ¶86) and configured to send instructions and parameters from the second SoC back to the first SoC (¶26, “For example, the second core 102b may send a message through the first queue 112a to the first core 102a, that includes an instruction and/or parameters to the first core 102a to execute the function F.sub.2 to modify the first data 106a”), wherein the second SoC is configured to instruct the first SoC via the bus to modify data (¶26, “The message may include all the required data (e.g., parameters and/or final data values) for the requested modification to the first data 106a, and then first core 102a may execute the function F.sub.2 to modify the first data 106a based on the required data”).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined method of Komaki, Gladkov, Jennings, and Petrov, such that the instructions and parameters correspond to metadata of the CV algorithm and to modify data corresponds to adjusting and rendering a next image frame to be sent by the first SoC to the second SoC, resulting in the eyewear comprising a bus separate from the interface and configured to send metadata of the CV algorithm from the second SoC back to the first SoC, wherein the second SoC is configured to instruct the first SoC via the bus to adjust and render a next image frame to be sent by the first SoC to the second SoC, as taught by Rozen so as to increase efficiency in workload processing and increase battery life by distributing the workload through SoC designation (¶2).
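For illustration only, the following sketch uses an in-process queue as a stand-in for the separate physical bus described in the Rozen mapping; the function names and message format are hypothetical. The second SoC sends CV metadata back, and the first SoC adjusts and renders the next frame in response.

    import queue

    def second_soc_send_feedback(bus, metadata):
        # Second SoC pushes CV metadata back over the separate bus,
        # instructing the first SoC to adjust the next rendered frame.
        bus.put({"adjust": True, "metadata": metadata})

    def first_soc_render_next(bus, frame_id):
        # First SoC consumes the instruction and renders the next image
        # accordingly before sending it over the image interface.
        msg = bus.get()
        brightness = 0.8 if msg["adjust"] else 1.0
        return {"frame": frame_id + 1, "brightness": brightness}

    if __name__ == "__main__":
        bus = queue.Queue()  # stand-in for the physical inter-SoC bus
        second_soc_send_feedback(bus, {"head": (0.0, 1.6, 0.0)})
        print(first_soc_render_next(bus, frame_id=41))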
Gladkov teaches the first SoC performs video processing (¶106) and the second SoC is for outputting images (¶109). Komaki, Gladkov, Jennings, Petrov, and Rozen combined do not mention wherein the interface is configured to communicate metadata with the images from the first SoC to the second SoC.
Grossman teaches a method (¶60; ¶70; ¶90; claim 1) of use of eyewear (fig. 3, item 300; ¶40 – augmented reality headset) including a frame (¶40 – augmented reality headsets have a frame), a first system on chip (fig. 3, item 310; ¶38); and an interface (¶36 – MIPI); wherein the interface is configured to transmit both images and metadata to and from the first SoC (¶20; ¶28; ¶35; ¶42).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined method of Komaki, Gladkov, Jennings, Petrov, and Rozen, wherein the interface is configured to transmit both images and metadata from the first SoC to the second SoC, as taught by Grossman so as to produce video for augmented reality displays that have higher resolution where user eyes might focus and less resolution in peripheral vision zones (¶27).
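The sketch below is likewise illustrative only and does not reproduce Grossman's wire format; it assumes a simple length-prefixed layout to show how images and metadata might travel together over a single interface.

    import json

    def pack_frame(image, metadata):
        # Prepend a length-prefixed metadata header to the image payload
        # so both travel over the same interface (hypothetical layout).
        header = json.dumps(metadata).encode()
        return len(header).to_bytes(4, "big") + header + image

    def unpack_frame(packet):
        # Recover the metadata header and the image payload.
        n = int.from_bytes(packet[:4], "big")
        metadata = json.loads(packet[4:4 + n])
        return packet[4 + n:], metadata

    if __name__ == "__main__":
        pkt = pack_frame(b"\x00" * 16, {"frame": 42, "gaze": [0.1, -0.2]})
        image, md = unpack_frame(pkt)
        print(md, len(image), "image bytes")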
With respect to Claim 13, claim 11 is incorporated. Komaki, Gladkov, Jennings, Petrov, and Rozen combined do not mention wherein the second SoC has a buffer, the buffer splitting up the images and the metadata.
Grossman teaches a method (¶60; ¶70; ¶90; claim 1) of use of eyewear (fig. 3, item 300; ¶40 – augmented reality headset) including a frame (¶40 – augmented reality headsets have a frame), a first system on chip (fig. 3, item 310; ¶38); and an interface (¶36 – MIPI); wherein the interface is configured to transmit both images and metadata to and from the first SoC (¶20; ¶28; ¶35; ¶42); wherein a first SoC has a buffer, the buffer splitting up the images and the metadata (¶42).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined method of Komaki, Gladkov, Jennings, Petrov, and Rozen, wherein either the first SoC, the second SoC, or both has a buffer, the buffer splitting up the images and the metadata, as taught by Grossman so as to produce video for augmented reality displays that have higher resolution where user eyes might focus and less resolution in peripheral vision zones (¶27).
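Again for illustration, with hypothetical names not taken from Grossman, a receive-side buffer that splits up interleaved images and metadata might be sketched as:

    class SplitBuffer:
        # Hypothetical receive buffer that separates interleaved image
        # and metadata packets into two collections, in the manner the
        # claim language describes.
        def __init__(self):
            self.images = []
            self.metadata = []

        def push(self, kind, payload):
            if kind == "image":
                self.images.append(payload)
            else:
                self.metadata.append(payload)

    if __name__ == "__main__":
        buf = SplitBuffer()
        buf.push("image", b"\x01\x02")
        buf.push("meta", {"frame": 1})
        print(len(buf.images), "images,", len(buf.metadata), "metadata records")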
With respect to Claim 18, Komaki teaches a non-transitory computer readable medium (fig. 7, item 11b; ¶99) including instructions for operating an eyewear device (figs. 1 & 3, item 100; ¶29; ¶102) including a frame (figs. 1 & 3, item 101; ¶29) having a first side (fig. 3, item 107; ¶29) and a second side (fig. 3, item 106; ¶29) and configured to be worn by a user having a head and hands (¶43), a first processor (fig. 3, item 11; ¶68) adjacent the first side of the frame, the first processor coupled to a camera (figs. 1 and 7, coupled to item 13R: camera via electrical connections; ¶17; ¶102), a second processor (fig. 3, item 11*; ¶68) adjacent the second side, the second processor coupled to a display (figs. 3 and 7, item 12L; ¶35; coupled via electrical connections, display ~ display 12L; ¶65, “data processors 11 and 11* having the same functions”; ¶68), the second processor coupled to the first processor (via electronic components/elements); and a data bus (¶102, a system bus of the data processor 11 is connected to a display and a camera 13).
Komaki does not teach the first processor as a first system on a chip (SoC) and the second processor as a second SoC nor does Komaki teach coupling of the second processor to the first processor by an interface extending between the first and second SoCs.
Gladkov teaches a non-transitory computer readable medium (¶133) including instructions for operating an eyewear device (fig. 2B, item 112; ¶63) including a frame (¶64, “front frame”); a first system on chip (SoC) (fig. 6, item 630B; ¶109); a second SoC (fig. 6, item 630C; ¶109); and coupling by an interface (fig. 6, item 684; ¶108) extending between the first and second SoCs.
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the non-transitory computer readable medium of Komaki, such that the first processor is a first system on a chip (SoC), the second processor is a second SoC, and coupling by an interface extending between the first and second SoCs, as taught by Gladkov so as to constrain rendering complexity at the application level in response to resource availability (¶10).
Komaki and Gladkov combined do not teach the interface comprising a mobile industry processor interface (MIPI), the interface configured to convert display serial interface (DSI) images of the first SoC to camera serial interface (CSI) images, wherein the interface is configured to unidirectionally send the CSI images from the first SoC to the second SoC for bulk image transmission, the instructions causing: the first SoC to process DSI images; the interface to convert the DSI images to CSI images; and the interface to unidirectionally send the CSI images to the second SoC.
Jennings teaches a method of using an interface between a first processor/SoC and a second processor/SoC, wherein the interface is a mobile industry processor interface (MIPI) (¶28) and the interface (fig. 2, item 212; ¶35; fig. 3D, item 382; ¶49) is configured to convert display serial interface (DSI) images of a first processor/SoC (fig. 2, item 210; ¶50, “compress or decompress data between an SoC (or processor) and display”) to camera serial interface (CSI) images (¶35; ¶49), wherein the interface is configured to unidirectionally send the CSI images from the first processor/SoC to the second processor/SoC for bulk image transmission (fig. 3D, items 396 and 398 are shown as unidirectional, with arrows pointing in one direction only; bulk transmission is shown because multiple lanes/wires exist), and a data bus (¶35), the method comprising: the first processor/SoC to process DSI images (¶35, “a processor or camera 210 … a compression method is used for enhancing efficiency of data transmission. For example, camera 210 includes a compressor 220 which compresses outgoing D-PHY data steam before transmission”); the interface to convert the DSI images to CSI images (¶35, “A function of bridge device 212 is to bridge or convert data streams for transmission between D-PHY and C-PHY”); and the interface to unidirectionally send the CSI images to the second SoC (fig. 3D, items 396 and 398 are shown as unidirectional, with arrows pointing in one direction only; bulk transmission is shown because multiple lanes/wires exist).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined non-transitory computer readable medium of Komaki and Gladkov, to utilize the interface of Jennings, resulting in the interface being a mobile industry processor interface (MIPI) configured to convert display serial interface (DSI) images of the first SoC to camera serial interface (CSI) images, wherein the interface is configured to unidirectionally send the CSI images from the first processor/SoC to the second processor/SoC for bulk image transmission, the instructions causing: the first SoC to process DSI images; the interface to convert the DSI images to CSI images; and the interface to unidirectionally send the CSI images to the second SoC, so as to facilitate data conversion between data streams of the D physical layer and data streams of the C physical layer (¶7).
Komaki, Gladkov, and Jennings combined do not teach the second SoC configured to support a computer vision (CV) algorithm, generate metadata of the CV algorithm, and render images including a location of a user’s head, a location and orientation of the user’s hands, and a low-resolution depth map of a scene in front of the user.
Petrov teaches an electronic device (fig. 3, item 120; ¶30, “the electronic device 120 includes a head-mountable enclosure. In various implementations, the head-mountable enclosure includes an attachment region to which another device with a display can be attached”), comprising: a processor (fig. 3, item 302; ¶58); and a computer readable storage medium including instructions for operating the electronic device (fig. 3, item 320; ¶62); the processor configured to: support a computer vision (CV) algorithm (¶84, “object recognition information 462 associated with physical objects recognized within the physical environment 105 (e.g., based on a classification algorithm, computer vision (CV) techniques, or the like)”; ¶119, “the one or more input devices correspond to a computer vision (CV) engine that uses an image stream from one or more exterior-facing image sensors, a finger/hand/extremity tracking engine, an eye tracking engine, a touch-sensitive surface, one or more microphones, and/or the like”); generate metadata of the CV algorithm (fig. 4B, item 445 comprising item 462; fig. 4C, item 445; ¶115; ¶120; ¶123; ¶131-132); and render images (fig. 4C, item 454; ¶53) including a location of a user’s head (¶36; ¶42; ¶64), a location and orientation of the user’s hands (¶36; ¶64), and a low-resolution depth map of a scene in front of the user (¶31, “the optional remote input devices correspond to fixed or movable sensory equipment within the physical environment 105 (e.g., image sensors, depth sensors, infrared (IR) sensors, event cameras, microphones, etc.)”; ¶59, “one or more depth sensors (e.g., structured light, time-of-flight, LiDAR, or the like)” – LiDAR sensors are low resolution).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date to modify the combined non-transitory computer readable medium of Komaki, Gladkov, and Jennings, such that the second SoC is configured to support a computer vision (CV) algorithm, generate metadata of the CV algorithm, and render images including a location of a user’s head, a location and orientation of the user’s hands, and a low-resolution depth map of a scene in front of the user, as taught by Petrov, so as to detect or identify objects (¶115).
Komaki, Gladkov, Jennings, and Petrov combined do not teach a bus separate from the interface and configured to send metadata of the CV algorithm back from the second SoC to the first SoC, wherein the second SoC is configured to instruct the first SoC via the bus to adjust and render a next image frame to be sent by the first SoC to the second SoC, such that the instructions cause: the second SoC to send metadata of the CV algorithm back to the first SoC via the bus such that the first SoC adjusts and renders a next image to be sent to the second SoC; and the first SoC to send the next image to the second SoC.
Rozen teaches eyewear (¶68), comprising: a frame (¶68, wearable headwear has a frame); a first system on a chip (SoC) (fig. 9, item 170; fig. 11, item 200; ¶77, “a processing element may include other elements on chip with the processor core 200”; fig. 12, item 1070: first processing element = first SoC; ¶78); a non-transitory computer readable medium (¶36) including instructions for operating the eyewear; a second SoC (¶71, other SoCs not illustrated; fig. 12, item 1080: second processing element = second SoC; ¶78); and a bus (fig. 12, item 1050: bus; ¶79); wherein the bus is separate from interfaces (fig. 12, items 1076, 1092, and 1096; ¶84; ¶86) and configured to send instructions and parameters from the second SoC back to the first SoC (¶26, “For example, the second core 102b may send a message through the first queue 112a to the first core 102a, that includes an instruction and/or parameters to the first core 102a to execute the function F.sub.2 to modify the first data 106a”), wherein the second SoC is configured to instruct the first SoC via the bus to modify data (¶26, “The message may include all the required data (e.g., parameters and/or final data values) for the requested modification to the first data 106a, and then first core 102a may execute the function F.sub.2 to modify the first data 106a based on the required data”).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined non-transitory computer readable medium of Komaki, Gladkov, Jennings, and Petrov, such that the instructions and parameters correspond to metadata of the CV algorithm and to modify data corresponds to adjusting and rendering a next image frame to be sent by the first SoC to the second SoC, resulting in the eyewear comprising a bus separate from the interface and configured to send metadata of the CV algorithm from the second SoC back to the first SoC, wherein the second SoC is configured to instruct the first SoC via the bus to adjust and render a next image frame to be sent by the first SoC to the second SoC, as taught by Rozen so as to increase efficiency in workload processing and increase battery life by distributing the workload through SoC designation (¶2).
Gladkov teaches the first SoC performs video processing (¶106) and the second SoC is for outputting images (¶109). Komaki, Gladkov, Jennings, Petrov, and Rozen combined do not mention wherein the interface is configured to communicate metadata with the images from the first SoC to the second SoC.
Grossman teaches an environment (fig. 3, item 300; ¶40 – includes an augmented reality headset) including a frame (¶40 – augmented reality headsets have a frame), a first system on chip (fig. 3, item 310; ¶38); a computer readable medium (fig. 3, item 323; ¶60); and an interface (¶36 – MIPI); wherein the interface is configured to transmit both images and metadata to and from the first SoC (¶20; ¶28; ¶35; ¶42).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined non-transitory computer readable medium of Komaki, Gladkov, Jennings, Petrov and Rozen, wherein the interface is configured to transmit both images and metadata from the first SoC to the second SoC, as taught by Grossman so as to produce video for augmented reality displays that have higher resolution where user eyes might focus and less resolution in peripheral vision zones (¶27).
With respect to Claim 20, claim 18 is incorporated. Komaki, Gladkov, Jennings, Petrov, and Rozen combined do not mention wherein the instructions when executed by the eyewear device instruct a buffer of the second SoC to split up the images and the metadata.
Grossman teaches a non-transitory computer readable medium (¶47) including instructions for operating an eyewear device (fig. 3, item 300; ¶40 – augmented reality headset) including a frame (¶40 – augmented reality headsets have a frame), a first system on chip (fig. 3, item 310; ¶38); and an interface (¶36 – MIPI); wherein the instructions when executed by the eyewear device instruct a buffer of the first SoC to split up the images and the metadata (¶20; ¶28; ¶35; ¶42).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined non-transitory computer readable medium of Komaki, Gladkov, Jennings, Petrov, and Rozen, wherein the instructions when executed by the eyewear device instruct a buffer of the first SoC, the second SoC, or both to split up the images and the metadata, as taught by Grossman so as to produce video for augmented reality displays that have higher resolution where user eyes might focus and less resolution in peripheral vision zones (¶27).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Komaki, Gladkov, Jennings, Petrov, Yamazaki, and Grossman, as applied to claim 1 above, and further in view of Hill et al. (Pub. No.: US 20220335663 A1) hereinafter referred to as Hill.
Please note that dependent claim 5 is similarly rejected with respect to Komaki, Gladkov, Jennings, Petrov, Rozen, and Grossman.
With respect to Claim 5, claim 1 is incorporated. Komaki, Gladkov, Jennings, Petrov, Yamazaki, and Grossman combined do not mention wherein the camera is a color camera, and further comprising a CV camera directly coupled to the second SoC, wherein the CV algorithm has direct access to the color images generated by the color camera.
Hill teaches a mobile device (fig. 1, item 10 and fig. 3, item 10a; ¶93; ¶109), comprising: a first system on chip (SoC) (fig. 3, item 18b; ¶103, “the data processor can comprise processing cores including (but not requiring) GPU cores and embedded processing for video codecs or AI operations, such as a NVidia® Tegra™ system on a chip (SoC)”; ¶109), the first SoC coupled to a camera (fig. 3, item 20b; ¶109); a second SoC (fig. 3, item 18a; ¶103, “the data processor can comprise processing cores including (but not requiring) GPU cores and embedded processing for video codecs or AI operations, such as a NVidia® Tegra™ system on a chip (SoC)”; ¶109), the second SoC coupled to the first SoC (fig. 3); wherein the camera is a color camera (fig. 3, item 20b; ¶98; ¶109), and further comprising a CV camera (fig. 3, item 20a, which is a CV camera because the images can be processed by a CV algorithm and the images enable the 3D reconstruction algorithm to scale an object size, which can inform the object detection algorithm when a candidate object is an appropriate size, and further make a more accurate 3D model – please also note that the specification of the instant application does not define “CV camera,” and the term is not commonly known; ¶97; ¶105; ¶109) directly coupled to the second SoC, wherein a CV algorithm has direct access to the color images generated by the color camera (fig. 10: processes within item 18; ¶104; ¶148; ¶151; ¶152).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined eyewear of Komaki, Gladkov, Jennings, Petrov, Yamazaki, and Grossman, such that the second SoC has the functionality of item 18a of figure 10 of Hill, resulting in the camera being a color camera and the eyewear further comprising a CV camera directly coupled to the second SoC, wherein the CV algorithm has direct access to the color images generated by the color camera, as taught by Hill so as to enable fast and efficient processing of images (¶3).
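As a final illustration, hypothetical and not drawn from Hill, a CV routine with direct access to frames from both a dedicated CV camera and a color camera might look like:

    def cv_step(cv_frame, color_frame, threshold=200):
        # Hypothetical CV step reading both feeds directly: the CV-camera
        # frame drives detection; the color-camera frame supplies context.
        detections = [i for i, px in enumerate(cv_frame) if px > threshold]
        mean_color = sum(color_frame) / len(color_frame)
        return {"detections": detections, "color_mean": mean_color}

    if __name__ == "__main__":
        cv_frame = bytes([10, 250, 30, 220])       # stand-in CV camera pixels
        color_frame = bytes([100, 120, 140, 160])  # stand-in color camera pixels
        print(cv_step(cv_frame, color_frame))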
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Komaki, Gladkov, Jennings, Petrov, Yamazaki, and Grossman as applied to claim 1 above, and further in view of Sahu et al. (Pub. No.: US 2023/0123242 A1) hereinafter referred to as Sahu.
Please note that dependent claim 7 is similarly rejected with respect to Komaki, Gladkov, Jennings, Petrov, Rozen, and Grossman.
With respect to Claim 7, claim 1 is incorporated. Komaki, Gladkov, Jennings, Petrov, Yamazaki, and Grossman combined do not mention wherein only the first SoC comprises an operating system (OS).
Sahu teaches eyewear (fig. 1, item 102; fig. 2, item 200), comprising: a frame (fig. 1, item 102); a first system on a chip (SoC) (fig. 2, item 202; ¶55); and a second SoC (fig. 2, item 204; ¶55), wherein only the first SoC comprises an operating system (OS) (¶57).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined eyewear of Komaki, Gladkov, Jennings, Petrov, Yamazaki, and Grossman, wherein only the first SoC comprises an operating system (OS), as taught by Sahu so as to distribute workload for quicker processing.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Komaki, Gladkov, Jennings, Petrov, Yamazaki, and Grossman as applied to claim 1 above, and further in view of Gatson et al. (Patent No.: US 10,852,828 B1) hereinafter referred to as Gatson.
Please note that dependent claim 9 is similarly rejected with respect to Komaki, Gladkov, Jennings, Petrov, Rozen, and Grossman.
With respect to Claim 9, claim 1 is incorporated. Komaki, Gladkov, Jennings, Petrov, Yamazaki, and Grossman combined do not teach wherein the second SoC is configured to perform computer vision (CV) and visual odometry (VIO).
Gatson teaches eyewear (fig. 2), comprising: a frame and a first system on a chip (SoC) (column 9, lines 12-16); wherein the first SoC is configured to perform computer vision (CV) and visual odometry (VIO) (column 10, lines 1-15).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date to modify the combined eyewear of Komaki, Gladkov, Jennings, Petrov, Yamazaki, and Grossman, wherein the second SoC is configured to perform computer vision (CV) and visual odometry (VIO), as taught by Gatson so as to find natural visual landmarks in augmented reality applications (column 10, lines 9-14), and to implement such capabilities into the second SoC of the combination so as to distribute the workload by defining the capabilities of each SoC as a matter of design choice.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Komaki, Gladkov, Jennings, Petrov, Yamazaki, and Grossman as applied to claim 1 above, and further in view of Rozen (Pub. No.: US 2019/0087225 A1), cited above.
Please note that dependent claim 10 is similarly rejected with respect to Komaki, Gladkov, Jennings, Petrov, Rozen, and Grossman.
With respect to Claim 10, claim 1 is incorporated, Komaki teaches wherein the eyewear further comprises a first display (fig. 3, item 12R; ¶35, right display 12R; ¶47, display of image IM1) and a second display (fig. 3, item 12L; ¶35, left display 12L; ¶47, display of image IM2) configured to be controlled by the first processor (¶55).
Although Komaki does not specifically mention that the first display and the second display are configured to be controlled by the second SoC, the allocation of functionality between the first SoC and the second SoC is a matter of design choice.
Rozen teaches eyewear (¶68), comprising: a frame (¶68, wearable headwear has a frame); a first system on a chip (SoC) (fig. 9, item 170; fig. 11, item 200; ¶77, “a processing element may include other elements on chip with the processor core 200”; fig. 12, item 1070: first processing element = first SoC; ¶78); and a second SoC (¶71, other SoCs not illustrated; fig. 12, item 1080: second processing element = second SoC; ¶78); wherein the second SoC is configured to send data back to the first SoC by a bus (fig. 12, item P-P: multi-drop bus; ¶79) to adjust and render a next image (¶71, “the host processor 160 communicates with other SOCs (not illustrated) to complete the workload. For example, another SOC may be coupled to the SOC 170 through the network controller 174 to execute the workload and allow for communication between SOC 170 and the another SOC”).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined eyewear of Komaki, Gladkov, Jennings, Petrov, Yamazaki, and Grossman, wherein the first display and the second display are configured to be controlled by the second SoC, as taught by Rozen so as to increase efficiency in workload processing and increase battery life by distributing the workload thru SoC designation (¶2).
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Komaki, Gladkov, Jennings, Petrov, Rozen, and Grossman as applied to claim 11 above, and further in view of Hill.
With respect to Claim 15, claim 11 is incorporated. Komaki, Gladkov, Jennings, Petrov, Rozen, and Grossman combined do not mention wherein the camera is a color camera, and further comprising a CV camera directly coupled to the second SoC, wherein the CV algorithm has direct access to the color images generated by the color camera.
Hill teaches a mobile device (fig. 1, item 10 and fig. 3, item 10a; ¶93; ¶109) and method (fig. 11; ¶162), comprising: a first system on chip (SoC) (fig. 3, item 18b; ¶103, “the data processor can comprise processing cores including (but not requiring) GPU cores and embedded processing for video codecs or AI operations, such as a NVidia® Tegra™ system on a chip (SoC)”; ¶109), the first SoC coupled to a camera (fig. 3, item 20b; ¶109); a second SoC (fig. 3, item 18a; ¶103, “the data processor can comprise processing cores including (but not requiring) GPU cores and embedded processing for video codecs or AI operations, such as a NVidia® Tegra™ system on a chip (SoC)”; ¶109), the second SoC coupled to the first SoC (fig. 3); wherein the camera is a color camera (fig. 3, item 20b; ¶98; ¶109), and further comprising a CV camera (fig. 3, item 20a, which is a CV camera because the images can be processed by a CV algorithm and the images enable the 3D reconstruction algorithm to scale an object size, which can inform the object detection algorithm when a candidate object is an appropriate size, and further make a more accurate 3D model – please also note that the specification of the instant application does not define “CV camera,” and the term is not commonly known; ¶97; ¶105; ¶109) directly coupled to the second SoC, wherein a CV algorithm has direct access to the color images generated by the color camera (fig. 10: processes within item 18; ¶104; ¶148; ¶151; ¶152).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined method of Komaki, Gladkov, Jennings, Petrov, Rozen, and Grossman, such that the second SoC has the functionality of item 18a of figure 10 of Hill resulting in wherein the camera is a color camera, and further comprising a CV camera directly coupled to the second SoC, wherein the CV algorithm has direct access to the color images generated by the color camera, as taught by Hill so as to enable fast and efficient processing of images (¶3).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONNA V Bocar whose telephone number is (571)272-0955. The examiner can normally be reached Monday - Friday 8:30am to 5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amr A Awad can be reached at (571)272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DONNA V Bocar/Examiner, Art Unit 2621