Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-53 are cancelled. Claims 54-56, 66, 68-69, and 71 are amended. Claims 54-73 are currently pending and under examination.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 9, 2026 has been entered.
Response to Arguments
Applicant’s arguments with respect to claim(s) 54-73 have been considered but are moot because the new ground of rejection does not rely on the combination of references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 54-55, 68-69, and 71 are rejected under 35 U.S.C. 103 as being unpatentable over Goodson et al. (Pub. No.: US 2019/0295455 A1), hereinafter referred to as Goodson, in view of Kim et al. (Pub. No.: US 2021/0049974 A1), hereinafter referred to as Kim, and further in view of Paolini et al. (Pub. No.: US 2013/0271498), hereinafter referred to as Paolini.
With respect to Claim 54, Goodson teaches a system (fig. 1A, item 110; ¶18) comprising: a display (fig. 1A, item 180a and/or 180b; ¶18); and a processor (fig. 1A, item 125; ¶19, “the local computing system 120 has components that include one or more general hardware processors (e.g., centralized processing unit, or “CPU”) 125 … an IDM (Image Data Manager) system 135 executes in memory 130 in order to perform at least some of the described techniques, such as by using the CPU(s) 125 and/or GPU(s) 144 to perform automated operations that implement those described techniques” – the CPU executes stored instructions) storing instructions that, when executed, cause the processor to: analyze one or more contents and determine one or more contexts of the one or more contents (fig. 3, items 305, 310, and 315; ¶47, “dynamically determining the structure to use based on current context (e.g., received information about a portion of the image of emphasis, such as from gaze tracking of a viewer, information from a program generating or otherwise providing the image, etc.)”); determine one or more redundant portions (fig. 3, item 315; ¶13, “As another non-exclusive example, while some illustrated embodiments discuss an implementation of an embodiment of the described techniques that uses particular display rows and/or columns (e.g., one dimensional arrays) in particular manners, such as to copy pixel values in such a row and/or column in a particular secondary display region to one or more other adjacent or otherwise nearby rows and/or columns, other embodiments may implement the use of pixel values in 1-to-M mappings in other manners”; ¶14-15) and one or more relevant portions (fig. 3, item 310) of the one or more contents based on the one or more contexts of the one or more contents (¶13-15; ¶47, “dynamically determining the structure to use based on current context (e.g., received information about a portion of the image of emphasis, such as from gaze tracking of a viewer, information from a program generating or otherwise providing the image, etc.)”), wherein the processor determines the one or more relevant portions based on dynamically determining the structure of use of the one or more contents (¶47); identify one or more first pixels in the display that correspond to the one or more redundant portions of the one or more contents (fig. 3, item 315; ¶47); identify one or more second pixels in the display that correspond to the one or more relevant portions of the one or more contents (fig. 3, item 310; ¶47); and communicate a command to the display to illuminate the one or more second pixels to display the one or more contents in response to the one or more contexts (fig. 3, items 320 and 325; ¶48).
Goodson does not explicitly teach communicating a command to the display to illuminate the one or more second pixels to display the one or more contents with optimized energy consumption in response to the one or more contexts.
Kim teaches a system (fig. 1, display device; ¶29) comprising: a display (fig. 1, item 100; ¶29; ¶31) and a timing controller (fig. 1, item 400; ¶37-38) configured to: determine one or more redundant portions and one or more relevant portions of the one or more contents (¶39); identify one or more first pixels in the display that correspond to the one or more redundant portions of the one or more contents (¶39); identify one or more second pixels in the display that correspond to the one or more relevant portions of the one or more contents (¶57; ¶68, relevant when adjacent horizontal lines differ); and communicate a command to the display to illuminate the one or more pixels to display the one or more contents with optimized energy consumption in response to the one or more contexts (¶39; ¶53).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Goodson to communicate a command to the display to illuminate the one or more second pixels to display the one or more contents with optimized energy consumption in response to the one or more contexts, as taught by Kim, so as to reduce power consumption (Kim, ¶2).
Goodson and Kim combined do not teach that dynamically determining the structure of use of the one or more contents is through trained data sets of one or more key information; namely, Goodson and Kim combined do not teach wherein the processor determines the one or more relevant portions based on trained data sets of one or more key information of the one or more contents.
Paolini teaches a system (fig. 3, item 300: mobile device; ¶42) comprising: a display (fig. 3); and a processor (fig. 11, item 1102; ¶47), the processor configured to determine one or more redundant portions and one or more relevant portions of one or more contents (¶56, “the apparatus 1100 is provided with the content detector 1116 to identify content portions of one or more document pages having information (e.g., display elements) to be excluded from display to facilitate displaying zoomed-in views of document contents deemed of relatively more interest for display to a user”), wherein the processor determines the one or more relevant portions based on trained data sets of one or more key information of the one or more contents (¶57, “the content detector 1116 is provided with image recognition capabilities, blank space detection capabilities, pattern matching capabilities, and text recognition capabilities” – image recognition systems use trained datasets).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined system of Goodson and Kim to use image recognition capabilities to determine one or more relevant portions based on trained data sets of one or more key information of the one or more contents, as taught by Paolini, resulting in the processor determining the one or more relevant portions based on trained data sets of one or more key information of the one or more contents, so as to easily identify content of interest.
With respect to Claim 55, claim 54 is incorporated; Goodson teaches wherein the system receives the one or more contents from a source (fig. 1A, item 194: content storage; ¶20; ¶31).
With respect to Claim 68, claim 54 is incorporated; Goodson teaches wherein the processor receives a trigger signal from a hardware component and communicates the command to the display (¶39; the trigger signal corresponds to the disparate criteria/identical criteria).
With respect to Claim 69, Goodson teaches a method (figs. 3-4; ¶46; ¶49) comprising: analyzing one or more contents and determining one or more contexts of the one or more contents (fig. 3, items 305, 310, and 315; ¶47, “dynamically determining the structure to use based on current context (e.g., received information about a portion of the image of emphasis, such as from gaze tracking of a viewer, information from a program generating or otherwise providing the image, etc.)”); determining one or more redundant portions (fig. 3, item 315; ¶13, “As another non-exclusive example, while some illustrated embodiments discuss an implementation of an embodiment of the described techniques that uses particular display rows and/or columns (e.g., one dimensional arrays) in particular manners, such as to copy pixel values in such a row and/or column in a particular secondary display region to one or more other adjacent or otherwise nearby rows and/or columns, other embodiments may implement the use of pixel values in 1-to-M mappings in other manners”; ¶14-15) and one or more relevant portions (fig. 3, item 310) of the one or more contents based on the one or more contexts of the one or more contents (¶13-15; ¶47, “dynamically determining the structure to use based on current context (e.g., received information about a portion of the image of emphasis, such as from gaze tracking of a viewer, information from a program generating or otherwise providing the image, etc.)”), wherein the one or more relevant portions are determined based on dynamically determining the structure of use of the one or more contents (¶47); identifying one or more first pixels in a display that correspond to the one or more redundant portions of the one or more contents (fig. 3, item 315; ¶47); identifying one or more second pixels in the display that correspond to the one or more relevant portions of the one or more contents (fig. 3, item 310; ¶47); and communicating a command to the display to illuminate the one or more second pixels to display the one or more contents in response to the one or more contexts (fig. 3, items 320 and 325; ¶48).
Goodson does not explicitly teach communicating a command to the display to illuminate the one or more second pixels to display the one or more contents with optimized energy consumption in response to the one or more contexts.
Kim teaches a method (fig. 4; ¶47; ¶56) comprising: determining one or more redundant portions and one or more relevant portions of the one or more contents (fig. 4, item S404; ¶39; ¶56); identifying one or more first pixels in the display that correspond to the one or more redundant portions of the one or more contents (fig. 4, item S404 – Yes; ¶39; ¶58); identifying one or more second pixels in the display that correspond to the one or more relevant portions of the one or more contents (fig. 4, item S404 – No; ¶57; ¶68, relevant when adjacent horizontal lines differ); and communicating a command to the display to illuminate the one or more pixels to display the one or more contents with optimized energy consumption in response to the one or more contexts (¶39; ¶53; ¶59).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Goodson to communicate a command to the display to illuminate the one or more second pixels to display the one or more contents with optimized energy consumption in response to the one or more contexts, as taught by Kim, so as to reduce power consumption (Kim, ¶2).
Goodson and Kim combined do not teach that dynamically determining the structure of use of the one or more contents is through trained data sets of one or more key information; namely, Goodson and Kim combined do not teach wherein the one or more relevant portions are determined based on trained data sets of one or more key information of the one or more contents.
Paolini teaches a system (fig. 3, item 300: mobile device; ¶42) comprising: a display (fig. 3); and a processor (fig. 11, item 1102; ¶47), the processor executing methods stored on a non-transitory computer readable medium to determine one or more redundant portions and one or more relevant portions of one or more contents (¶56, “the apparatus 1100 is provided with the content detector 1116 to identify content portions of one or more document pages having information (e.g., display elements) to be excluded from display to facilitate displaying zoomed-in views of document contents deemed of relatively more interest for display to a user”), wherein the processor determines the one or more relevant portions based on trained data sets of one or more key information of the one or more contents (¶57, “the content detector 1116 is provided with image recognition capabilities, blank space detection capabilities, pattern matching capabilities, and text recognition capabilities” – image recognition systems use trained datasets).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined method of Goodson and Kim to use image recognition capabilities to determine one or more relevant portions based on trained data sets of one or more key information of the one or more contents, as taught by Paolini, resulting in the one or more relevant portions being determined based on trained data sets of one or more key information of the one or more contents, so as to easily identify content of interest.
With respect to Claim 71, Goodson teaches a non-transitory computer readable medium (¶23, “one or more processors or other configured hardware circuitry and/or memory and/or storage, such as when configured by one or more software programs (e.g., by the system 135 and/or it components) and/or data structures, such as by execution of software instructions of the one or more software programs and/or by storage of such software instructions and/or data structures”) storing a sequence of instructions, which when executed by a processor (fig. 1A, item 125; ¶19, “the local computing system 120 has components that include one or more general hardware processors (e.g., centralized processing unit, or “CPU”) 125 … an IDM (Image Data Manager) system 135 executes in memory 130 in order to perform at least some of the described techniques, such as by using the CPU(s) 125 and/or GPU(s) 144 to perform automated operations that implement those described techniques” – the CPU executes stored instructions) causes: analyzing one or more contents and determining one or more contexts of the one or more contents (fig. 3, items 305, 310, and 315; ¶47, “dynamically determining the structure to use based on current context (e.g., received information about a portion of the image of emphasis, such as from gaze tracking of a viewer, information from a program generating or otherwise providing the image, etc.)”); determining one or more redundant portions (fig. 3, item 315; ¶13, “As another non-exclusive example, while some illustrated embodiments discuss an implementation of an embodiment of the described techniques that uses particular display rows and/or columns (e.g., one dimensional arrays) in particular manners, such as to copy pixel values in such a row and/or column in a particular secondary display region to one or more other adjacent or otherwise nearby rows and/or columns, other embodiments may implement the use of pixel values in 1-to-M mappings in other manners”; ¶14-15) and one or more relevant portions (fig. 3, item 310) of the one or more contents based on the one or more contexts of the one or more contents (¶13-15; ¶47, “dynamically determining the structure to use based on current context (e.g., received information about a portion of the image of emphasis, such as from gaze tracking of a viewer, information from a program generating or otherwise providing the image, etc.)”), wherein the one or more relevant portions are determined based on dynamically determining the structure of use of the one or more contents (¶47); identifying one or more first pixels in a display that correspond to the one or more redundant portions of the one or more contents (fig. 3, item 315; ¶47); identifying one or more second pixels in the display that correspond to the one or more relevant portions of the one or more contents (fig. 3, item 310; ¶47); and communicating a command to a display control module to illuminate the one or more second pixels to display the one or more contents in response to the one or more contexts (fig. 3, items 320 and 325; ¶48).
Goodson does not explicitly teach communicating a command to the display to illuminate the one or more second pixels to display the one or more contents with optimized energy consumption in response to the one or more contexts.
Kim teaches a method (fig. 4; ¶47; ¶56) comprising: determining one or more redundant portions and one or more relevant portions of the one or more contents (fig. 4, item S404; ¶39; ¶56); identifying one or more first pixels in the display that correspond to the one or more redundant portions of the one or more contents (fig. 4, item S404 – Yes; ¶39; ¶58); identifying one or more second pixels in the display that correspond to the one or more relevant portions of the one or more contents (fig. 4, item S404 – No; ¶57; ¶68, relevant when adjacent horizontal lines differ); and communicating a command to the display to illuminate the one or more pixels to display the one or more contents with optimized energy consumption in response to the one or more contexts (¶39; ¶53; ¶59).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the non-transitory computer readable medium of Goodson to communicate a command to the display to illuminate the one or more second pixels to display the one or more contents with optimized energy consumption in response to the one or more contexts, as taught by Kim, so as to reduce power consumption (Kim, ¶2).
Goodson and Kim combined do not teach that dynamically determining the structure of use of the one or more contents is through trained data sets of one or more key information; namely, Goodson and Kim combined do not teach wherein the one or more relevant portions are determined based on trained data sets of one or more key information of the one or more contents.
Paolini teaches a system (fig. 3, item 300: mobile device; ¶42) comprising: a display (fig. 3); and a processor (fig. 11, item 1102; ¶47), the processor executing methods stored on a non-transitory computer readable medium to determine one or more redundant portions and one or more relevant portions of one or more contents (¶56, “the apparatus 1100 is provided with the content detector 1116 to identify content portions of one or more document pages having information (e.g., display elements) to be excluded from display to facilitate displaying zoomed-in views of document contents deemed of relatively more interest for display to a user”), wherein the processor determines the one or more relevant portions based on trained data sets of one or more key information of the one or more contents (¶57, “the content detector 1116 is provided with image recognition capabilities, blank space detection capabilities, pattern matching capabilities, and text recognition capabilities” – image recognition systems use trained datasets).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined non-transitory computer readable medium of Goodson and Kim to use image recognition capabilities to determine one or more relevant portions based on trained data sets of one or more key information of the one or more contents, as taught by Paolini, resulting in the one or more relevant portions being determined based on trained data sets of one or more key information of the one or more contents, so as to easily identify content of interest.
Claims 56-59, 65-67, 70, and 72 are rejected under 35 U.S.C. 103 as being unpatentable over Goodson, Kim, and Paolini as applied to claims 54, 69, and 71 above, and further in view of Imamura et al. (Pub. No.: US 2023/0234523 A1), hereinafter referred to as Imamura.
With respect to Claim 56, claim 54 is incorporated; Goodson, Kim, and Paolini combined do not mention wherein the system comprises a sensor that determines one or more interior features and one or more exterior features of an entity.
Imamura teaches a system (fig. 12; ¶269) comprising: a display (fig. 12, items 108 and 109; ¶288); a display control module (fig. 12, items 124 and 124; ¶292-293); and a processor (fig. 12, item 104; ¶271) executing stored instructions (fig. 12, item 105; ¶300), wherein the system comprises a sensor (fig. 12, items 101, 102, and 103) that determines one or more interior features and one or more exterior features of an entity (¶276; ¶278-279; ¶341; ¶412; the interior features correspond to the positions of the occupants, an exterior feature is the vehicle location, and the vehicle corresponds to the entity).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined system of Goodson, Kim, and Paolini such that the system is located in a vehicle, resulting in wherein the system comprises a sensor that determines one or more interior features and one or more exterior features of an entity, as taught by Imamura, so as to provide a system capable of optimum illumination control and video projection according to an action or a state of each occupant inside a vehicle (Imamura, ¶11).
With respect to Claim 57, claim 56 is incorporated; Goodson, Kim, and Paolini combined do not mention wherein the entity comprises a vehicle.
Imamura teaches a system (fig. 12; ¶269) comprising: a display (fig. 12, items 108 and 109; ¶288); a display control module (fig. 12, items 124 and 124; ¶292-293); and a processor (fig. 12, item 104; ¶271) executing stored instructions (fig. 12, item 105; ¶300), wherein the system comprises a sensor (fig. 12, items 101, 102, and 103) that determines one or more interior features and one or more exterior features of an entity (¶276; ¶278-279; ¶341; ¶412; the interior features correspond to the positions of the occupants, an exterior feature is the vehicle location, and the vehicle corresponds to the entity); wherein the entity comprises a vehicle (¶273).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined system of Goodson, Kim, and Paolini such that the system is located in a vehicle, resulting in wherein the entity comprises a vehicle, as taught by Imamura, so as to provide a system capable of optimum illumination control and video projection according to an action or a state of each occupant inside a vehicle (Imamura, ¶11).
With respect to Claim 58, claim 56 is incorporated; Goodson, Kim, and Paolini combined do not mention wherein the one or more interior features comprises a position of one or more seats within a vehicle and a position of one or more occupants within the vehicle.
Imamura teaches a system (fig. 12; ¶269) comprising: a display (fig. 12, items 108 and 109; ¶288); a display control module (fig. 12, items 124 and 124; ¶292-293); and a processor (fig. 12, item 104; ¶271) executing stored instructions (fig. 12, item 105; ¶300), wherein the system comprises a sensor (fig. 12, items 101, 102, and 103) that determines one or more interior features and one or more exterior features of an entity (¶276; ¶278-279; ¶341; ¶412; the interior features correspond to the positions of the occupants, an exterior feature is the vehicle location, and the vehicle corresponds to the entity); wherein the one or more interior features comprises a position of one or more seats within a vehicle and a position of one or more occupants within the vehicle (¶273-274; ¶276-277).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined system of Goodson, Kim, and Paolini such that the system is located in a vehicle, resulting in wherein the one or more interior features comprises a position of one or more seats within a vehicle and a position of one or more occupants within the vehicle, as taught by Imamura, so as to provide a system capable of optimum illumination control and video projection according to an action or a state of each occupant inside a vehicle (Imamura, ¶11).
With respect to Claim 59, claim 56 is incorporated; Goodson, Kim, and Paolini combined do not mention wherein the one or more exterior features comprises an ambient lighting condition around a vehicle, a scene around the vehicle, a parking location of the vehicle, and a mobility information of the vehicle.
Imamura teaches a system (fig. 12; ¶269) comprising: a display (fig. 12, items 108 and 109; ¶288); a display control module (fig. 12, items 124 and 124; ¶292-293); and a processor (fig. 12, item 104; ¶271) executing stored instructions (fig. 12, item 105; ¶300), wherein the system comprises a sensor (fig. 12, items 101, 102, and 103) that determines one or more interior features and one or more exterior features of an entity (¶276; ¶278-279; ¶341; ¶412; the interior features correspond to the positions of the occupants, an exterior feature is the vehicle location, and the vehicle corresponds to the entity); wherein the one or more exterior features comprises a location of the vehicle (¶278, “a current location of the vehicle”) and mobility information of the vehicle (¶302, “operation information and the like associated with map data are also recorded”; ¶312).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined system of Goodson, Kim, and Paolini such that the system is located in a vehicle, resulting in wherein the one or more exterior features comprises a location of the vehicle and mobility information of the vehicle, as taught by Imamura, so as to provide a system capable of optimum illumination control and video projection according to an action or a state of each occupant inside a vehicle (Imamura, ¶11).
With respect to Claim 65, claim 56 is incorporated; Goodson, Kim, and Paolini combined do not mention wherein the system further comprises a display alignment and orientation module.
Imamura teaches a system (fig. 12; ¶269) comprising: a display (fig. 12, items 108 and 109; ¶288); a display control module (fig. 12, items 124 and 124; ¶292-293); and a processor (fig. 12, item 104; ¶271) executing stored instructions (fig. 12, item 105; ¶300), wherein the system comprises a sensor (fig. 12, items 101, 102, and 103) that determines one or more interior features and one or more exterior features of an entity (¶276; ¶278-279; ¶341; ¶412; the interior features correspond to the positions of the occupants, an exterior feature is the vehicle location, and the vehicle corresponds to the entity); wherein the system further comprises a display alignment and orientation module (¶294-296).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined system of Goodson, Kim, and Paolini such that the system is located in a vehicle, resulting in wherein the system further comprises a display alignment and orientation module, as taught by Imamura, so as to provide a system capable of optimum illumination control and video projection according to an action or a state of each occupant inside a vehicle (Imamura, ¶11).
With respect to Claim 66, claim 65 is incorporated; Goodson, Kim, and Paolini combined do not mention wherein the sensor determines a position of one or more occupants and a movement of a vehicle.
Imamura teaches a system (fig. 12; ¶269) comprising: a display (fig. 12, items 108 and 109; ¶288); a display control module (fig. 12, items 124 and 124; ¶292-293); and a processor (fig. 12, item 104; ¶271) executing stored instructions (fig. 12, item 105; ¶300), wherein the system comprises a sensor (fig. 12, items 101, 102, and 103) that determines one or more interior features and one or more exterior features of an entity (¶276; ¶278-279; ¶341; ¶412; the interior features correspond to the positions of the occupants, an exterior feature is the vehicle location, and the vehicle corresponds to the entity); wherein the system further comprises a display alignment and orientation module (¶294-296); and wherein the sensor determines a position of one or more occupants (via items 101 and 102 of fig. 12; ¶72; ¶275; ¶341; ¶412) and a movement of a vehicle (¶312, “data obtained by accumulating user information acquired in time series in accordance with a location where the vehicle is moving”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined system of Goodson, Kim, and Paolini such that the system is located in a vehicle, resulting in wherein the sensor determines a position of one or more occupants and a movement of a vehicle, as taught by Imamura, so as to provide a system capable of optimum illumination control and video projection according to an action or a state of each occupant inside a vehicle (Imamura, ¶11).
With respect to Claim 67, claim 66 is incorporated; Goodson, Kim, and Paolini combined do not mention wherein the display alignment and orientation module automatically aligns and orients the display based on a user input.
Imamura teaches a system (fig. 12; ¶269) comprising: a display (fig. 12, items 108 and 109; ¶288); a display control module (fig. 12, items 124 and 124; ¶292-293); and a processor (fig. 12, item 104; ¶271) executing stored instructions (fig. 12, item 105; ¶300), wherein the system comprises a sensor (fig. 12, items 101, 102, and 103) that determines one or more interior features and one or more exterior features of an entity (¶276; ¶278-279; ¶341; ¶412; the interior features correspond to the positions of the occupants, an exterior feature is the vehicle location, and the vehicle corresponds to the entity); wherein the system further comprises a display alignment and orientation module (¶294-296); wherein the sensor determines a position of one or more occupants (via items 101 and 102 of fig. 12; ¶72; ¶275; ¶341; ¶412) and a movement of a vehicle (¶312, “data obtained by accumulating user information acquired in time series in accordance with a location where the vehicle is moving”); and wherein the display alignment and orientation module automatically aligns and orients the display based on a user input (figs. 7C, 8C, 9C, 10C, and 11C; ¶277; ¶341-342).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined system of Goodson, Kim, and Paolini such that the system is located in a vehicle, resulting in wherein the display alignment and orientation module automatically aligns and orients the display based on a user input, as taught by Imamura, so as to provide a system capable of optimum illumination control and video projection according to an action or a state of each occupant inside a vehicle (Imamura, ¶11).
With respect to Claim 70, claim 69 is incorporated; Goodson, Kim, and Paolini combined do not mention further comprising: automatically aligning and orienting the display based on a user input.
Imamura teaches a system (fig. 12; ¶269) comprising: a display (fig. 12, items 108 and 109; ¶288); a display control module (fig. 12, items 124 and 124; ¶292-293); and a processor (fig. 12, item 104; ¶271) executing stored instructions (fig. 12, item 105; ¶300), wherein the system comprises a sensor (fig. 12, items 101, 102, and 103) that determines one or more interior features and one or more exterior features of an entity (¶276; ¶278-279; ¶341; ¶412; the interior features correspond to the positions of the occupants, an exterior feature is the vehicle location, and the vehicle corresponds to the entity), further comprising: automatically aligning and orienting the display based on a user input (figs. 7C, 8C, 9C, 10C, and 11C; ¶277; ¶341-342).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined method of Goodson, Kim, and Paolini such that the method is performed by a processor located in a vehicle, resulting in further comprising: automatically aligning and orienting the display based on a user input, as taught by Imamura, so as to provide a system capable of optimum illumination control and video projection according to an action or a state of each occupant inside a vehicle (Imamura, ¶11).
With respect to Claim 72, claim 71 is incorporated; Goodson, Kim, and Paolini combined do not mention further causing: automatically aligning and orienting the display based on a user input.
Imamura teaches a system (fig. 12; ¶269) comprising: a display (fig. 12, items 108 and 109; ¶288); a display control module (fig. 12, items 124 and 124; ¶292-293); and a processor (fig. 12, item 104; ¶271) executing stored instructions (fig. 12, item 105; ¶300), wherein the system comprises a sensor (fig. 12, items 101, 102, and 103) that determines one or more interior features and one or more exterior features of an entity (¶276; ¶278-279; ¶341; ¶412; the interior features correspond to the positions of the occupants, an exterior feature is the vehicle location, and the vehicle corresponds to the entity), further causing: automatically aligning and orienting the display based on a user input (figs. 7C, 8C, 9C, 10C, and 11C; ¶277; ¶341-342).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined non-transitory computer readable medium of Goodson, Kim, and Paolini such that the analyzing, determining, identifying, and communicating are performed in a vehicle, resulting in further causing: automatically aligning and orienting the display based on a user input, as taught by Imamura, so as to provide a system capable of optimum illumination control and video projection according to an action or a state of each occupant inside a vehicle (Imamura, ¶11).
Claims 62-64 are rejected under 35 U.S.C. 103 as being unpatentable over Goodson, Kim, and Paolini as applied to claim 54 above, and further in view of Goluguri (Pub. No.: US 2020/0178073 A1).
With respect to Claim 62, claim 54 is incorporated; Goodson, Kim, and Paolini combined do not mention wherein the system comprises an artificial intelligence engine comprising a natural language processing (NLP) engine.
Goluguri teaches a system (fig. 2, item 200; ¶17) comprising: a display (figs. 1-2, items 124a and 124b; ¶15); a display control module (fig. 2, item 202); and a processor (fig. 2, item 202; ¶20) executing instructions that, when executed, cause the processor to: analyze one or more contents and determine one or more contexts of the one or more contents (¶20, “Specifically, each of the modules may operate as a node that may send and/or receive data”; ¶22, “the virtual assistance module 208 which may process messages including spoken and written message data received from external networks (e.g., the server 224, the mobile device 220, or the second vehicle 232) and may deliver the message to an intended recipient (i.e., a vehicle occupant) based on private content of the message”; ¶29, “the vehicle virtual assistance system 200 comprises the display 124 for providing visual output such as, for example, messages and other information, entertainment, maps, navigation, information, or a combination thereof”); wherein the system comprises an artificial intelligence engine comprising a natural language processing (NLP) engine (¶27).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined system of Goodson, Kim, and Paolini such that the system is located in a vehicle, resulting in wherein the system comprises an artificial intelligence engine comprising a natural language processing (NLP) engine, as taught by Goluguri, so as to classify content for privacy (Goluguri, ¶28).
With respect to Claim 63, claim 62 is incorporated; Goodson, Kim, and Paolini combined do not mention wherein the artificial intelligence engine analyzes the one or more contents and extracts one or more key information based on at least one of tokenization, part-of-speech (POS) tagging, named entity recognition (NER), dependency parsing, sentiment analysis, and text classification.
Goluguri teaches a system (fig. 2, item 200; ¶17) comprising: a display (figs. 1-2, items 124a and 124b; ¶15); a display control module (fig. 2, item 202); and a processor (fig. 2, item 202; ¶20) executing instructions that, when executed, cause the processor to: analyze one or more contents and determine one or more contexts of the one or more contents (¶20, “Specifically, each of the modules may operate as a node that may send and/or receive data”; ¶22, “the virtual assistance module 208 which may process messages including spoken and written message data received from external networks (e.g., the server 224, the mobile device 220, or the second vehicle 232) and may deliver the message to an intended recipient (i.e., a vehicle occupant) based on private content of the message”; ¶29, “the vehicle virtual assistance system 200 comprises the display 124 for providing visual output such as, for example, messages and other information, entertainment, maps, navigation, information, or a combination thereof”); wherein the system comprises an artificial intelligence engine comprising a natural language processing (NLP) engine (¶27); wherein the artificial intelligence engine analyzes the one or more contents and extracts one or more key information based on at least one of part-of-speech (POS) tagging, named entity recognition (NER), parsing, and sentiment analysis (¶27).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined system of Goodson, Kim, and Paolini such that the system is located in a vehicle, resulting in wherein the artificial intelligence engine analyzes the one or more contents and extracts one or more key information based on at least one of part-of-speech (POS) tagging, named entity recognition (NER), parsing, and sentiment analysis, as taught by Goluguri, so as to classify content for privacy utilizing artificial intelligence (Goluguri, ¶28).
With respect to Claim 64, claim 63 is incorporated; Goodson, Kim, and Paolini combined do not explicitly mention wherein the natural language processing (NLP) engine determines the one or more relevant portions and the one or more redundant portions based on trained datasets of extraction of the one or more key information.
Goluguri teaches a system (fig. 2, item 200; ¶17) comprising: a display (figs. 1-2, items 124a and 124b; ¶15); a display control module (fig. 2, item 202); and a processor (fig. 2, item 202; ¶20) executing instructions that, when executed, cause the processor to: analyze one or more contents and determine one or more contexts of the one or more contents (¶20, “Specifically, each of the modules may operate as a node that may send and/or receive data”; ¶22, “the virtual assistance module 208 which may process messages including spoken and written message data received from external networks (e.g., the server 224, the mobile device 220, or the second vehicle 232) and may deliver the message to an intended recipient (i.e., a vehicle occupant) based on private content of the message”; ¶29, “the vehicle virtual assistance system 200 comprises the display 124 for providing visual output such as, for example, messages and other information, entertainment, maps, navigation, information, or a combination thereof”); wherein the system comprises an artificial intelligence engine comprising a natural language processing (NLP) engine (¶27); wherein the artificial intelligence engine analyzes the one or more contents and extracts one or more key information based on at least one of part-of-speech (POS) tagging, named entity recognition (NER), parsing, and sentiment analysis (¶27); and wherein the natural language processing (NLP) engine determines the one or more relevant portions (¶27, “Natural language processing may also include semantics detection and analysis and any other analysis of data including textual data and unstructured data”; ¶22; ¶29, “the vehicle virtual assistance system 200 comprises the display 124 for providing visual output such as, for example, messages and other information, entertainment, maps, navigation, information, or a combination thereof”) and the one or more redundant portions based on trained datasets of extraction of the one or more key information.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined system of Goodson, Kim, and Paolini such that the system is located in a vehicle, wherein the natural language processing (NLP) engine determines the one or more relevant portions (such as messages that do not contain private content) and, by combining the analysis of a video signal with natural language processing, the one or more redundant portions based on trained datasets of extraction of the one or more key information, as taught by Goluguri, so as to classify content for privacy utilizing artificial intelligence (Goluguri, ¶28).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONNA V Bocar whose telephone number is (571)272-0955. The examiner can normally be reached Monday - Friday 8:30am to 5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amr A Awad can be reached at (571)272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DONNA V Bocar/Examiner, Art Unit 2621